Converting 2D pixel coordinates to real XYZ coordinates

I am trying to get 3D coordinates from (u, v) pixel coordinates.

I am trying to use the formulas in this link with the fx, fy, cx, cy parameters from `sl.CameraParameters`, but fx, fy, cx, cy all come back as 0, so the formulas don't work. Could the values be 0 because I am opening an SVO file that I recorded from my ZED X camera, or is there another problem?
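For reference, the formulas in question are the standard pinhole back-projection (where Z is the metric depth at pixel (u, v)):

```
X = (u - cx) * Z / fx
Y = (v - cy) * Z / fy
Z = depth(u, v)
```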

Hi @emrekocdemir
please post the code that you used to retrieve the camera parameters.
Have you read this support page?

Hi @Myzhar
I read it and shared the link to the same page above.

```python
depth_data = image.get_data()
depth_data = depth_data.astype(np.float32)
camera_params = sl.CameraParameters()  # default-constructed, so fx, fy, cx, cy are 0

c_x = camera_params.cx
c_y = camera_params.cy

depth_value = depth_data[v, u]
Z = depth_value
X = ((u - c_x) * Z) / camera_params.fx
Y = ((v - c_y) * Z) / camera_params.fy

print(f"X: {X}, Y: {Y}, Z: {Z}")
```

Hi @emrekocdemir,

You can retrieve camera calibration information from the following call:


This will return a CalibrationParameters object where you can retrieve left and right camera calibration values.
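A minimal sketch of that retrieval (assuming the Python API; the `camera_configuration` attribute path is from SDK 4.x and may differ in older versions). The key point is that the intrinsics are only populated on a camera that has been opened; constructing `sl.CameraParameters()` directly yields zeros, which matches the original symptom:

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
# To replay a recording instead of a live camera:
# init_params.set_from_svo_file("path/to/recording.svo")
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera / SVO")

# CalibrationParameters with left/right CameraParameters
calib = zed.get_camera_information().camera_configuration.calibration_parameters
left = calib.left_cam  # populated, unlike a default-constructed sl.CameraParameters()
print(left.fx, left.fy, left.cx, left.cy)
```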

Hi @Myzhar,

I can access the fx, fy, cx, cy values with the method you shared, which is perfect. But the Z value comes out as a matrix, so X and Y become matrices too. This is the code I use to get the Z value:

```python
image = sl.Mat()
depth_data = image.get_data()
depth_value = depth_data[u, v]
Z = depth_value
```

Am I calculating it wrong? Is there another way?

```
[ 44  77  76 255]
```

This is what I am getting as the Z value.

Where does `image` come from?

You must use `retrieve_measure(measure=MEASURE.DEPTH)`.

Thank you for your help, but I don't quite understand how to use it. Can you explain in a little more detail how to calculate the Z value?
I tried the code in this link but it doesn't look right, and it doesn't use fx, fy, cx, cy. My main goal is to detect the position of a hand-held object. I get the pixel position correctly, but when I compare the position of the hand (from `bodies.body_list[0].keypoint[9]`) with the position of the detected object, the position from the link comes out very different.

@emrekocdemir this tutorial can help you better understand the command:

We need your code to understand what’s wrong with it. Can you post it?
If you cannot share it publicly please write an email to

There is no problem using the code in this link (point cloud), but I want to use and compare the formulas in this link, so I need to be able to calculate the Z value. Can you help me with that, if possible?

The “Z” value is the depth value obtained by calling:

```python
err, depth_value = depth_map.get_value(u, v)
```

where `depth_map` is filled with this call after each grab:

```python
zed.retrieve_measure(depth_map, sl.MEASURE.DEPTH)
```
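Putting the pieces together, a runnable sketch of the back-projection step (the SDK calls are the ones shown above and are left as comments; the numeric intrinsics in the example are illustrative only, not real ZED X calibration values):

```python
def pixel_to_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera-frame XYZ."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth

# With the ZED SDK (sketch; requires an opened camera or SVO):
#   depth_map = sl.Mat()
#   if zed.grab() == sl.ERROR_CODE.SUCCESS:
#       zed.retrieve_measure(depth_map, sl.MEASURE.DEPTH)
#       err, z = depth_map.get_value(u, v)   # metric depth, not a BGRA pixel
#       print(pixel_to_xyz(u, v, z, cam.fx, cam.fy, cam.cx, cam.cy))

# Illustrative numbers only: principal point at (960, 540), depth 2.0 m
print(pixel_to_xyz(960, 540, 2.0, 700.0, 700.0, 960.0, 540.0))  # -> (0.0, 0.0, 2.0)
```

Note that `get_value(u, v)` returns a single metric depth, unlike indexing into an image retrieved with `get_data()`, which yields a BGRA pixel like the `[ 44 77 76 255]` shown earlier.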

I compared the values from the depth map and the keypoint, but there is a big difference.

I am comparing keypoint 9 and the white dot, as you can see above, but there is a problem with the data.

```
-1.854631374627607 -1.209342655870868 2.7277729511260986   (from the depth-pixel calculation with fx, fy, cx, cy, Z, u, v)
[-1.1413424   0.66083884 -2.2877834 ]                      (from the direct 3D keypoint coordinates)
```

As you can see, there is an anomaly; the pixel-derived values should be very close to the 3D keypoint coordinates.

@emrekocdemir We can’t identify the issue with your code without having access to it.

How are you visualizing the values from the formula calculation and the real ones? Please share more code regarding the visualization.

You can find many examples on GitHub.