I am currently using 4 ZED cameras to capture human body movements and want to fuse their outputs into a single point cloud of the body. It seems this feature is still under development: Fused Point Clouds.
How can I achieve this with the current SDK?
That's correct, the fused point cloud is not yet available in the Fusion module of the ZED SDK.
I suggest you explore Iterative Closest Point (ICP) algorithms to create a fused point cloud from the 4 cameras.
OpenCV, Open3D, and PCL are three libraries that provide implementations of ICP-based point cloud registration.
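For illustration, the inner step that ICP repeats (after re-estimating point correspondences by nearest-neighbour search) is a best-fit rigid alignment, often computed with the Kabsch algorithm. Here is a minimal numpy sketch of that step; it is not a full ICP loop, and in practice a library routine such as Open3D's `registration_icp` would be used instead:

```python
import numpy as np

def kabsch_align(src, dst):
    """Best-fit rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points.
    This is the alignment step that ICP repeats after
    re-estimating correspondences.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Demo: recover a known 30-degree rotation about Z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(0).random((100, 3))
dst = src @ R_true.T + t_true
R_est, t_est = kabsch_align(src, dst)
```

With exact correspondences, as in the demo, one step recovers the transform; with real scans, ICP iterates this together with correspondence search until convergence.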
Thank you for your reply. I feel like once we have the relative positions between the cameras (or the world coordinates of each camera), fusing the point clouds should be straightforward.
Can I get this information from the ZED SDK, for example from the Fusion API?
No, it works the other way around: you must pass each camera's position to the Fusion API, not retrieve it from it.
You can use ZED360 to calibrate, or an Aruco Tag: GitHub - stereolabs/zed-aruco: ZED SDK samples using ArUco tag detection
Oh, so I should be able to get the extrinsics of each camera from the configuration file output by ZED360 (Fusion | Stereolabs), and then use this information to fuse the point clouds. Is that correct?
Yes, that’s correct. You can use this information as a prior for ICP.
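As a sketch of how those extrinsics could be applied: build a 4x4 camera-to-world matrix from each camera's pose and transform that camera's points with it. The code below assumes the pose is stored as an axis-angle rotation vector plus a translation; the exact encoding in the ZED360 JSON (axis-angle vs. quaternion, units) should be checked against the SDK documentation:

```python
import numpy as np

def pose_to_matrix(rvec, tvec):
    """4x4 homogeneous camera-to-world transform from an
    axis-angle rotation vector and a translation, using
    Rodrigues' formula.

    NOTE: axis-angle encoding is an assumption about the
    ZED360 config format; verify against the SDK docs.
    """
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec
    return T

def to_world(points, T):
    """Apply a 4x4 camera-to-world transform to an (N, 3) cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```

With the extrinsics as a prior, the clouds land roughly in one world frame, and ICP then only needs to correct small residual misalignments, e.g. `fused = np.vstack([to_world(c, T) for c, T in zip(clouds, transforms)])` before refinement.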
I succeeded in creating a multi-camera JSON configuration file for 2 cameras. I now want to use it to get a correctly fused point cloud from the ROS 2 wrapper. Is there something ready-made, or do I need to implement the transformation myself?
The ZED SDK does not yet output a fused multi-camera point cloud, so you must write your own node that subscribes to the individual point cloud topics and fuses them.
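The geometric core of such a node is framework-free: convert each incoming `sensor_msgs/PointCloud2` to an (N, 3) array (e.g. with `sensor_msgs_py.point_cloud2.read_points`), transform it by that camera's extrinsic from the config file, and stack the results. A minimal sketch of that core, with the ROS plumbing left out:

```python
import numpy as np

def fuse_clouds(clouds, extrinsics):
    """Merge per-camera point clouds into one world-frame cloud.

    clouds:     list of (N_i, 3) arrays, one per camera.
    extrinsics: list of 4x4 camera-to-world transforms
                (e.g. derived from the ZED360 config file).

    In a ROS 2 node, each array would come from a PointCloud2
    subscription callback; this function is plain geometry.
    """
    fused = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((homo @ T.T)[:, :3])
    return np.vstack(fused)
```

The fused array can then be republished as a single `PointCloud2` in the world frame; synchronizing the per-camera messages (e.g. with `message_filters`) is the main remaining concern.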
The ROS 2 Wrapper with multi-camera Fusion support will be released in the next few months.