# Coordinates of objects in the real world

Hi,
I’m struggling with the following problem:
I play back the video files with fusion activated, and I would like to understand how I can obtain the latitude-longitude coordinates of the centroid of an object I see in the image.
Let’s say the centroid is located at pixel (px, py).
I retrieved its x, y, z coordinates with:

```python
measure = sl.Mat()
zed.retrieve_measure(measure, measure=sl.MEASURE.XYZ)
measure.get_data()[py, px][:3]
```

How can I convert these to latitude, longitude, and altitude coordinates?
I found the fusion.camera_to_geo function, but which arguments should I give it?
If I do:

```python
fused_position = sl.Pose()
geo_pos = sl.GeoPose()
fusion.get_position(fused_position)
fusion.camera_to_geo(fused_position, geo_pos)
```

I should get the coordinates of the camera, right? But from that, how do I get the coordinates of the detected object?
Furthermore, if I do:

```python
geo_pos.latlng_coordinates.get_coordinates(False)
```

and compare it to:

```python
current_geopose = sl.GeoPose()
fusion.get_geo_pose(current_geopose)
current_geopose.latlng_coordinates.get_coordinates(False)
```

Shouldn’t these be exactly the same? For some reason they are slightly different.

Hello @Prospecto ,

We recently updated our documentation about the global localization module. In your case, the VIO / GNSS section should be useful.

In fact, the camera_to_geo function transforms any pose from the camera coordinate frame to the global coordinate frame. So if you input the position of an object expressed in the camera coordinate frame, it will output the corresponding position expressed in the global coordinate frame.

Regards,
Tanguy

How do I pass the position of the object to the camera_to_geo function? camera_to_geo requires a Pose object, so I assume I have to insert the x, y, z coordinates I get from retrieve_measure into a Translation object and use it to initialize the Pose object… is that correct?
Could you provide a code snippet to help me understand how you would proceed?

Yes, indeed, this is correct. You also need to fill the timestamp attribute of your Pose object. Also, be careful that the translation you set is referred to the WORLD frame and not to the CAMERA frame.

Unfortunately, we don’t usually provide code snippets on the forum. We generally prefer to direct users to the official SDK samples and documentation, as that ensures the information provided is up to date and comprehensive. However, we’d be happy to assist you further if you have any specific questions or need help with something more complex. Please feel free to ask, and we’ll do our best to provide a helpful response.

Regards,
Tanguy

I still have a couple of questions:

1. Why do I need to fill the timestamp in the Pose object?
2. To initialize the pose I need a Transform. Is it enough to set the translation of the transform to the WORLD coordinates of the object, or do I need to set the rotation too?
3. How do I get the translation referred to the WORLD frame? In the retrieve_measure function I can specify the type of measure, but there is no argument for the coordinate frame:

```python
zed.retrieve_measure(measure, measure=sl.MEASURE.XYZ)
```

Furthermore, what’s the difference between camera.retrieve_measure and fusion.retrieve_measure?

Hello @Prospecto ,

1. The timestamp field of `sl::Pose` is copied into the corresponding `sl::GeoPose`; this is why you need to set it. If you don’t, the corresponding GeoPose will have a timestamp equal to 0. It is better to set it, but the SDK won’t break if you don’t.
2. Yes, indeed, you can set the rotation part of your transform to identity. Since you just need the lat/lng/alt position of your object, you do not need to set the rotation in the Pose sent to camera_to_geo. And yes, you do need to send the position in the WORLD coordinate frame, otherwise the resulting transformation won’t be correct.
3. You can specify the coordinate system in which measures are returned via the RuntimeParameters argument of the grab method, specifically with the measure3D_reference_frame attribute.
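Putting the three points above together, here is a minimal sketch of the pipeline. The variables `zed`, `fusion`, `px`, and `py` are assumed to come from your existing playback loop, and the `init_transform(transform, timestamp)` signature follows the discussion in this thread, so double-check both against your SDK version:

```python
import pyzed.sl as sl

# Point 3: ask the SDK to return measures expressed in the WORLD frame.
runtime_params = sl.RuntimeParameters()
runtime_params.measure3D_reference_frame = sl.REFERENCE_FRAME.WORLD

if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    # Centroid of the object at pixel (px, py), now in WORLD coordinates.
    point_cloud = sl.Mat()
    zed.retrieve_measure(point_cloud, measure=sl.MEASURE.XYZ)
    x, y, z = point_cloud.get_data()[py, px][:3]

    # Point 2: translation only; the rotation is left as identity.
    translation = sl.Translation()
    translation.init_vector(x, y, z)
    transform = sl.Transform()
    transform.set_translation(translation)

    # Point 1: fill the timestamp (in nanoseconds).
    object_pose = sl.Pose()
    ts = zed.get_timestamp(sl.TIME_REFERENCE.IMAGE)
    object_pose.init_transform(transform, ts.get_nanoseconds())

    # Convert the object's WORLD pose into a geographic pose.
    object_geopose = sl.GeoPose()
    fusion.camera_to_geo(object_pose, object_geopose)
    coords = object_geopose.latlng_coordinates.get_coordinates(False)
```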

The distinction between zed.retrieve_measure and fusion.retrieve_measure lies in which entity computes the measure. In the former, the retrieved measure corresponds solely to the specified camera, whereas in the latter it encompasses the fusion of measures from all cameras.

Regards,
Tanguy

Thanks a lot! Really nice and detailed answer.
One last question: if I am post-processing the data, I guess I could calibrate the transformation on the whole recording and use it from the beginning, right? Would that be useful, and how could I achieve it?

Yes, indeed, you can do this! Once the calibration is computed, you can ingest the previous object positions in order to get their lat/lng/alt positions.

This is where the timestamp can be useful: you can save your positions in a list and then use this list to compute your previous object positions.
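One way to organize that replay, sketched in plain Python (independent of the SDK; all names here are illustrative): buffer each object position together with its timestamp during the first pass, then, once the calibration has converged, look entries up by nearest timestamp.

```python
import bisect

class PositionBuffer:
    """Stores (timestamp_ns, position) pairs in timestamp order."""

    def __init__(self):
        self.timestamps = []
        self.positions = []

    def add(self, timestamp_ns, position):
        # Timestamps arrive in increasing order during playback.
        self.timestamps.append(timestamp_ns)
        self.positions.append(position)

    def nearest(self, timestamp_ns):
        """Return the buffered position closest in time to timestamp_ns."""
        i = bisect.bisect_left(self.timestamps, timestamp_ns)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.timestamps)]
        best = min(candidates, key=lambda j: abs(self.timestamps[j] - timestamp_ns))
        return self.positions[best]

buffer = PositionBuffer()
buffer.add(1_000_000_000, (1.0, 2.0, 3.0))
buffer.add(2_000_000_000, (1.5, 2.5, 3.5))
print(buffer.nearest(1_900_000_000))  # → (1.5, 2.5, 3.5)
```

After calibration, you would iterate over the buffered entries and feed each stored WORLD position (with its saved timestamp) through camera_to_geo.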

Regards,
Tanguy

Thanks for all the support!


While experimenting a bit with timestamps in the playback example, I noticed something: the timestamps of the ZED camera and of fusion differ by a few frames. Namely, if I run:

```python
zed_pose.timestamp.get_milliseconds()
zed.get_timestamp(sl.TIME_REFERENCE.IMAGE).get_milliseconds()
```

I get the same results, but if I run:

```python
fused_position.timestamp.get_milliseconds()
```

I get slightly different timestamps (fusion is a few frames late).
I don’t understand why they are not aligned (since I am using playback).
Nevertheless, when I fill the timestamp of the Pose object before passing it to the camera_to_geo function, which timestamp should I use? Following what seems logical to me, I would pass zed.get_timestamp(sl.TIME_REFERENCE.IMAGE), but I just wanted to make sure.

Furthermore, in the example there’s this line

```python
viewer.updateGeoPoseData(current_geopose, zed.get_timestamp(sl.TIME_REFERENCE.CURRENT))
```

I don’t get why it uses the CURRENT time reference (instead of IMAGE), and why it doesn’t use fused_position (since, as I mentioned before, its timestamp differs from the ZED one).
I was expecting something like:

```python
viewer.updateGeoPoseData(current_geopose, fused_position.get_timestamp(sl.TIME_REFERENCE.IMAGE))
```

Lastly, is the timestamp required by pose.init_transform() in milliseconds or nanoseconds? It’s not specified in the docs.

Hello @Prospecto,

Indeed, fusion requires synchronization, which introduces a bit of latency between sl::Fusion and sl::Camera. The latency should be something like 1 or 2 frames if you are in a SHARED_MEMORY setup. If you want to understand the full pipeline, you can read our documentation.

Yes, indeed, you are right: it should be sl.TIME_REFERENCE.IMAGE and not sl.TIME_REFERENCE.CURRENT. Regarding your timestamp question: use the ZED timestamp if you are retrieving objects from the camera, and the fusion timestamp if you are retrieving objects from fusion. For accuracy, the timestamp should be specified in nanoseconds.
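A small plain-Python illustration of the unit point (when you hold an sl.Timestamp object, prefer its get_nanoseconds() accessor directly; the helper names and the 30 fps figure below are only illustrative). If you stored milliseconds, convert before filling the Pose, and you can also express the camera/fusion offset in frame periods to sanity-check the one-to-two-frame latency:

```python
NS_PER_MS = 1_000_000

def ms_to_ns(timestamp_ms):
    """Pose timestamps are expected in nanoseconds; convert if you stored ms."""
    return int(timestamp_ms * NS_PER_MS)

def frames_apart(ts_a_ns, ts_b_ns, fps=30):
    """How many frame periods separate two timestamps (e.g. camera vs fusion)."""
    frame_ns = 1_000_000_000 / fps
    return abs(ts_a_ns - ts_b_ns) / frame_ns

cam_ts = ms_to_ns(1_000_100)  # camera IMAGE timestamp
fus_ts = ms_to_ns(1_000_033)  # fusion timestamp, ~2 frames earlier at 30 fps
print(round(frames_apart(cam_ts, fus_ts)))  # → 2
```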

Regards,
Tanguy