Hi, I don’t know whether the point cloud data acquired by the camera is arranged according to the captured frame, or whether there are row and column indexes. Specifically, I want to perform a coordinate transformation first, converting the point cloud into a coordinate system whose XY plane is the floor, and then extract the z-axis coordinates of the points and arrange them according to the captured image. Please tell me how to achieve this, thank you very much.

In addition, I have another problem: when I run plane_detection.py and press the “p” key or the space bar, no plane is detected. I would also like to know how to get the camera pose or the plane equation of the detected plane, so I can solve my previous problem.

(The image upload failed here; in any case, I cannot detect the plane.)

Also, the latest example does not support the ZED 2i with SDK 3.8.2.

Hi @zore017
the point cloud information is arranged in an sl::Mat of size W×H.
Each “cell” with map coordinates (u, v) of the point cloud corresponds to the pixel with the same coordinates (u, v) in the corresponding depth map and in the corresponding color image.
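To illustrate the organized layout described above, here is a minimal sketch using a plain numpy array in place of the sl::Mat (the shape, the NaN convention for invalid depth, and the sample values are assumptions for illustration; in the real SDK you would retrieve the map with a measure call such as XYZRGBA and index it the same way):

```python
import numpy as np

# Sketch of an organized W x H point cloud (illustrative values only).
# cloud[v, u] holds the (X, Y, Z, color) of the SAME pixel (u, v) as the
# depth map and the left color image: row index is v, column index is u.
H, W = 4, 6
cloud = np.full((H, W, 4), np.nan, dtype=np.float32)
cloud[2, 3, :3] = (0.1, -0.2, 1.5)   # a valid 3D point at pixel u=3, v=2

def point_at_pixel(cloud, u, v):
    """Return the (X, Y, Z) stored at image pixel (u, v)."""
    return cloud[v, u, :3]

xyz = point_at_pixel(cloud, 3, 2)
valid = np.isfinite(xyz).all()       # invalid depth shows up as NaN/inf
```

Because the map is organized, “arranging the z values according to the captured image” is just taking one channel of this array; no reprojection is needed as long as you keep the (u, v) indexing.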

Conversely, to convert a 3D point of the cloud to its respective (u, v) image coordinates, you can use the formulas reported in this support page:
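The formulas in question are the standard pinhole projection. A small sketch (the intrinsics fx, fy, cx, cy below are made-up illustrative numbers; in practice you would read them from the camera’s calibration parameters in the SDK):

```python
import numpy as np

# Illustrative intrinsics (assumption: not from a real calibration).
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0

def project(X, Y, Z):
    """Pinhole projection: map a 3D point in the camera frame
    to image coordinates (u, v)."""
    u = cx + fx * X / Z
    v = cy + fy * Y / Z
    return u, v

u, v = project(0.1, -0.2, 1.5)   # a point 1.5 m in front of the camera
```

This is simply the inverse of the back-projection that produced the point cloud, so projecting a cloud point lands back on the pixel it was measured at.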

When you press the space bar, you must make sure the camera is seeing a large portion of the floor so it can be detected.
When you hit ‘p’, you must make sure the cross is pointing at a valid surface.
You can also click with the mouse to detect a surface at the point where you click.
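Once a floor plane has been detected (in the Python SDK this is exposed through calls along the lines of find_floor_plane on the camera and a plane object carrying the plane equation), the z coordinate “with the floor as the XY plane” is just the signed distance of each point to that plane. A sketch of that last step with numpy (the plane coefficients below are made up; in practice they would come from the detected plane’s equation a·x + b·y + c·z + d = 0):

```python
import numpy as np

def height_above_floor(points, plane):
    """Signed distance of Nx3 camera-frame points to the floor plane
    (a, b, c, d) with a*x + b*y + c*z + d = 0 -- i.e. the z coordinate
    in a floor-aligned frame."""
    a, b, c, d = plane
    n = np.array([a, b, c], dtype=float)
    return (points @ n + d) / np.linalg.norm(n)

# Assumed example: a horizontal floor at y = 0.5 with +Y pointing up.
plane = (0.0, 1.0, 0.0, -0.5)
pts = np.array([[0.0, 1.5, 2.0],    # 1.0 above the floor
                [0.0, 0.5, 1.0]])   # exactly on the floor
h = height_above_floor(pts, plane)
```

Applying this per-cell to the organized point cloud map gives a W×H “height image” aligned with the camera frames, which is what the original question asks for.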

Thanks for your reply. So from that link, I can calculate the image coordinates (u, v) of the corresponding pixel directly from the 3D coordinates (X, Y, Z) of a cloud point, right? Am I understanding correctly?