Hi,
I just need to know what the reference frame is for the fused point cloud data.
After investigating the zed-ros2-wrapper, I found that while publishing point cloud data, the frame_id = "map", which is at the bottom base center of the ZED. Is it the same reference frame for the point cloud when retrieved through the ZED SDK?
Hi,
I am in a position where I have to use point cloud data from the ZED 2i for obstacle avoidance and other applications. For this, I need the reference frame for the fused point cloud.
Hi @mathanprasannakumar1,
The point cloud in the ZED SDK is retrieved by default in REFERENCE_FRAME::WORLD, which is initialized by default at the position of the left CMOS sensor of the camera and remains stationary as the camera moves.
You can also retrieve the point cloud in REFERENCE_FRAME::CAMERA, in which case the point cloud is expressed relative to the reference frame of the left CMOS sensor.
Please find information on retrieving point cloud data here: Using the Depth Sensing API - Stereolabs
Please be aware that the FusedPointCloud object is retrieved from our SpatialMapping module; more information about this here: Spatial Mapping Overview - Stereolabs
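For reference, a minimal sketch of how a FusedPointCloud can be extracted with the C++ API could look like the snippet below (the parameter values, units, and error handling are simplified assumptions here, please adapt them to your setup):

#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.coordinate_units = sl::UNIT::METER;  // assumption: meters
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    // Spatial mapping requires positional tracking to be enabled first
    zed.enablePositionalTracking();

    // Ask the mapping module to build a fused point cloud instead of a mesh
    sl::SpatialMappingParameters mapping_params;
    mapping_params.map_type = sl::SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    zed.enableSpatialMapping(mapping_params);

    // Grab a number of frames so the map can accumulate data
    for (int i = 0; i < 500; ++i) zed.grab();

    // Extract the whole map built so far (expressed in the stationary WORLD frame)
    sl::FusedPointCloud fused_cloud;
    zed.extractWholeSpatialMap(fused_cloud);

    zed.disableSpatialMapping();
    zed.disablePositionalTracking();
    zed.close();
    return 0;
}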
Hi @mattrouss,
Thanks for the reply and clarification.
You have mentioned that the point cloud can be retrieved relative to the world frame and the camera frame, but these configuration options are not available.
From the code snippet below, I understand that point cloud data can be retrieved relative to the camera frame, as mentioned in the depth sensing docs.
zed.retrieveMeasure(point_cloud, MEASURE::XYZRGBA, MEM::GPU, width, height);
Can you explain what the difference is between the point cloud retrieved through depth sensing and the one retrieved through spatial mapping?
I am a little bit confused here, as in spatial mapping the fused point cloud data also has the Euclidean XYZ of a point in space.
You can choose to retrieve the point cloud in the CAMERA or WORLD reference frame using the RuntimeParameters attribute measure3D_reference_frame. The RuntimeParameters are to be provided to the grab() method.
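A minimal sketch of this (the units and the call to enablePositionalTracking are assumptions beyond what was discussed above, please adapt to your setup):

#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.coordinate_units = sl::UNIT::METER;  // assumption: meters
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    // WORLD-frame measures rely on positional tracking being enabled
    zed.enablePositionalTracking();

    sl::RuntimeParameters runtime_params;
    // Express 3D measures in the stationary WORLD frame;
    // use sl::REFERENCE_FRAME::CAMERA for the left-sensor frame instead
    runtime_params.measure3D_reference_frame = sl::REFERENCE_FRAME::WORLD;

    sl::Mat point_cloud;
    if (zed.grab(runtime_params) == sl::ERROR_CODE::SUCCESS) {
        // XYZRGBA point cloud of the current frame, in the selected reference frame
        zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZRGBA);
    }

    zed.close();
    return 0;
}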
The point cloud retrieved with depth sensing is the point cloud computed at a given time by our depth algorithm.
The spatial mapping module uses both our depth sensing and positional tracking algorithms to fuse multiple point clouds together and output a single 3D representation of the environment, retrievable as either a Mesh or a FusedPointCloud.
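As a rough sketch of that difference inside a grab loop (this assumes the camera has already been opened with positional tracking and spatial mapping enabled, as in the earlier snippet; the grabLoop function name and the 30-frame request interval are just illustrative choices):

#include <sl/Camera.hpp>

void grabLoop(sl::Camera& zed, sl::RuntimeParameters& runtime_params) {
    sl::Mat frame_cloud;            // point cloud of the current frame (depth sensing)
    sl::FusedPointCloud fused_map;  // map accumulated across frames (spatial mapping)

    int frame = 0;
    while (zed.grab(runtime_params) == sl::ERROR_CODE::SUCCESS) {
        // Depth sensing: the point cloud computed for this frame only
        zed.retrieveMeasure(frame_cloud, sl::MEASURE::XYZRGBA);

        // Spatial mapping: request the fused map periodically (non-blocking),
        // then retrieve it once the request has completed
        if (frame % 30 == 0) zed.requestSpatialMapAsync();
        if (zed.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS)
            zed.retrieveSpatialMapAsync(fused_map);

        frame++;
    }
}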
Thanks for the support @mattrouss, now I have a clear idea.