Set explicit pose to Camera

Hi,

I’m using a ZED 2i mounted on a vehicle for spatial mapping. I have an external GPS mounted alongside the ZED that gives precise information about its x-y-z position. Is there any way to overwrite the ZED’s position with these GPS positions? I want to keep the rotations calculated from the ZED’s IMU, but instead of using its accelerometer to determine the translation, I want to use my GPS coordinates, as they are more accurate (I noticed that the generated mesh often displays a substantial amount of drift, and this might be a possible solution). What I’m looking for is essentially a “Zed.setPosition(Pose)” as a counterpart to the available “Zed.getPosition()” - does anyone know if this is possible?

Hi, I’m having the same problem: I have a robot pose that we consider the ground truth, but the ZED 2 camera attached to the robot reports a different pose, which gives a false position for the detected object in the world coordinate frame.

There is a ROS service called set_pose which you can use to set the ZED camera pose. The roslaunch command is
roslaunch zed_wrapper zed2.launch
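Assuming the standard zed-ros-wrapper interface, the service can then be invoked from the command line roughly like this (the node name /zed2/zed_node and the zero values are placeholders to adjust to your launch setup; the six floats are x, y, z and roll, pitch, yaw):

```shell
# Call the zed-ros-wrapper set_pose service to override the tracked pose.
# Node name and pose values are placeholders for your own setup.
rosservice call /zed2/zed_node/set_pose 0.0 0.0 0.0 0.0 0.0 0.0
```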

My solution to my issue is this: given the robot pose (ground truth), the camera pose, and the object position in the world coordinate frame as captured by the camera, I calculate the difference in x, y, z between the object position and the camera pose, and add that difference to the robot pose to compensate for the noisy camera pose.
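For what it’s worth, that per-axis compensation can be sketched in plain Python (the pose values below are made-up placeholders, not real camera output):

```python
# Sketch of the per-axis compensation described above.
# All poses are (x, y, z) tuples in the same world frame; the numbers
# are placeholders, not real ZED output.

def compensate(robot_pose, camera_pose, object_pos):
    """Re-anchor a detected object on the ground-truth robot pose
    instead of the (drifting) camera pose."""
    # Offset of the object relative to the camera, per axis.
    offset = tuple(o - c for o, c in zip(object_pos, camera_pose))
    # Add that offset to the ground-truth robot pose.
    return tuple(r + d for r, d in zip(robot_pose, offset))

robot_pose = (10.0, 5.0, 0.0)    # ground truth from the robot
camera_pose = (10.3, 4.8, 0.1)   # drifting camera pose
object_pos = (12.3, 6.8, 0.1)    # object as seen by the camera

print(compensate(robot_pose, camera_pose, object_pos))  # (12.0, 7.0, 0.0)
```

Note that this only shifts the origin; as discussed further down in the thread, it does not correct for any rotation between the two frames.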


Thanks for your reply! The solution I landed on was to call Camera.resetPositionalTracking(ground_truth_pose) periodically. I haven’t tried it with real GPS data yet, but it at least seems to update the pose.

I am working on a similar application, and I am also trying to find a solution for the potential drift of IMU data over long distances/timespans. We are trying to geolocate objects by combining the ZED SDK with GPS localization.

I have tried to think about integrating your solutions into our case study, but I see some issues that will affect the result.

The REFERENCE_FRAME::WORLD of the camera has its x and y axes tied to the initial position of the camera. To use any coordinate extracted from this frame in a geographic coordinate reference system, it would first need a linear transformation, so that the z axis matches the north axis. I have not found any information about this in the ZED SDK documentation.

In addition to that, the converted vector would need a translation after the transformation, to account for the change of origin between the two reference systems: the (0, 0) starting position needs to be mapped to the actual GPS position of the camera at the same time. This translation cannot be done before the transformation.
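The rotate-then-translate order described above can be sketched in plain Python. Everything here is an assumption for illustration: a right-handed frame with z up, and a known heading angle between the camera’s initial x axis and the geographic frame (obtained e.g. from a compass or from aligning two GPS fixes; none of this comes from the ZED SDK itself):

```python
import math

def camera_world_to_geo(p, heading, gps_origin):
    """Map a point from the camera's world frame into a local
    GPS-anchored frame: rotate about the up (z) axis first,
    then translate to the camera's GPS start position."""
    x, y, z = p
    c, s = math.cos(heading), math.sin(heading)
    # 1) rotate so the camera's axes line up with the geographic axes
    xr = c * x - s * y
    yr = s * x + c * y
    # 2) translate so the camera's (0, 0, 0) start maps onto its GPS fix
    ox, oy, oz = gps_origin
    return (xr + ox, yr + oy, z + oz)

# With a 90-degree heading, a point 1 m along the camera's +x axis ends
# up 1 m along the geographic +y axis, offset by the GPS origin:
p = camera_world_to_geo((1.0, 0.0, 0.0), math.pi / 2, (100.0, 200.0, 50.0))
# p is approximately (100.0, 201.0, 50.0)
```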

The solution of Krisna would not work, because after subtracting the vectors the linear transformation still needs to be applied to account for the change of axes.

The solution that funcfunc2 is planning also implies doing the translation before the linear transformation, which will give a wrong result.
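A toy 2-D example makes the ordering problem concrete: rotation and translation do not commute, so translating first and rotating second lands somewhere else entirely (90-degree rotation and a made-up offset, purely for illustration):

```python
def rot90(x, y):
    """Rotate a 2-D point 90 degrees counter-clockwise."""
    return (-y, x)

point = (1.0, 0.0)
offset = (10.0, 0.0)

# Rotate first, then translate (the correct order here):
rx, ry = rot90(*point)
a = (rx + offset[0], ry + offset[1])   # (10.0, 1.0)

# Translate first, then rotate (the wrong order):
tx, ty = point[0] + offset[0], point[1] + offset[1]
b = rot90(tx, ty)                      # (-0.0, 11.0) -- a different point
```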

Maybe I missed something about this process; if you have any information about it, I’d be happy to hear it. If I find an answer on my side, I will post it here.

Or does the camera handle that automatically? I know that there is a magnetic_heading value available (although it is not very accurate).