3D Mapping with zed_multi_camera and ROS2

Hi all,

After setting up the URDF and pose tracking with the URDF (i.e., the information about the position and orientation of the cameras), I was wondering whether the topic mapping/fused_cloud shouldn't already be aligned based on the defined camera extrinsics (each camera's position w.r.t. the "master" camera or base link). When I display the fused_cloud in RViz, the orientation seems to be incorrect.

Do I have to do the alignment myself, e.g. in a custom ROS2 node, in order to get a 360-degree map?
And what is the difference between the topics mapping/fused_cloud and /point_cloud/cloud_registered?

Hi @robinvetsch
Each camera provides its own fused point cloud.
There's no algorithm running in the ZED SDK or the ZED ROS2 Wrapper that performs the complete fusion of all 4 clouds.
To get good results, you need a good extrinsic calibration of the 4 cameras' positions and orientations, and at the end you must use an ICP-like algorithm to create the final fused cloud.

ICP stands for Iterative Closest Point; you can find many variants in the PCL and Open3D libraries.
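
For illustration, here is a minimal sketch of that pipeline in Python with Open3D (this is not something the wrapper does for you): coarse-align a second camera's cloud using the calibrated extrinsics, then refine the residual error with ICP. The file names and the extrinsic transform are placeholders you would replace with your own data.

```python
# Minimal sketch: fuse two camera clouds with known extrinsics + ICP refinement.
# Assumptions: the clouds were saved from each camera's point cloud topic, and
# T_master_other is the 4x4 extrinsic of the second camera w.r.t. the master.
import numpy as np
import open3d as o3d

master_cloud = o3d.io.read_point_cloud("cam_front.pcd")  # placeholder file
other_cloud = o3d.io.read_point_cloud("cam_rear.pcd")    # placeholder file

# Calibrated extrinsics (identity here as a stand-in for your URDF values).
T_master_other = np.eye(4)
other_cloud.transform(T_master_other)  # coarse alignment

# Refine the remaining misalignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    other_cloud, master_cloud,
    max_correspondence_distance=0.05,  # metres; tune to your scene scale
    init=np.eye(4),  # extrinsics already applied, so start from identity
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

other_cloud.transform(result.transformation)  # apply the ICP correction
fused = master_cloud + other_cloud            # merge into one cloud
o3d.io.write_point_cloud("fused.pcd", fused)
print("ICP fitness:", result.fitness)
```

For a 4-camera rig you would repeat this for each non-master camera, registering each cloud against the master (or against the growing fused cloud). PCL offers an equivalent pcl::IterativeClosestPoint class in C++ if you prefer to do this inside a ROS2 node.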


Hi @Myzhar

Okay, all clear! Thank you very much!