I am trying to get a fused point cloud from multiple ZED cameras and stream it in real time over the network to Unity. What is the easiest way to approach this using the SDK?
I noticed the Multi Cam Body Tracking samples do not contain any streaming code, and the samples that do stream handle each camera separately.
Would the best approach be to use the spatial mapping module?
The ZED SDK does not provide point cloud fusion capabilities, so the Unity plugin does not have this feature.
The multi-camera samples do something different: the ZED SDK can fuse Body Tracking information (skeleton data) coming from multiple cameras, but not raw point clouds.
Spatial Mapping is not the right tool here; it accumulates a single camera’s depth into a per-camera fused map over time. It doesn’t fuse across multiple simultaneous cameras.
I understand that point cloud fusion is quite difficult to achieve. Are there any plans to add this in the future?
For now I might try to simply overlay the independent clouds using the cameras' known positions. Is there currently any way of getting multiple cameras' point clouds into Unity, short of modifying the ZedManager to allow multiple instances?
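For what it's worth, the naive overlay amounts to transforming each camera's points into a shared world frame with its known 4x4 pose and concatenating the results. Here is a minimal sketch of that idea in plain numpy (the function names and toy poses are illustrative, not part of the ZED SDK; the real per-camera clouds would come from the SDK's point cloud retrieval):

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 camera-to-world transform from a 3x3 rotation and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def overlay_clouds(clouds, poses):
    """Naively merge per-camera point clouds (each N x 3, in that camera's frame)
    by transforming every cloud into the common world frame and concatenating."""
    world_points = []
    for points, pose in zip(clouds, poses):
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # N x 4
        world_points.append((homogeneous @ pose.T)[:, :3])
    return np.vstack(world_points)

# Toy example: two cameras facing each other, 2 m apart along Z,
# each seeing the same point 1 m in front of it.
cloud_a = np.array([[0.0, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
pose_a = make_pose(np.eye(3), np.zeros(3))        # camera A at the world origin
flip = np.diag([-1.0, 1.0, -1.0])                 # camera B rotated 180° about Y
pose_b = make_pose(flip, np.array([0.0, 0.0, 2.0]))

merged = overlay_clouds([cloud_a, cloud_b], [pose_a, pose_b])
# Both observations land at (0, 0, 1) in world space, as expected.
```

This gives a simple union of the clouds, not true fusion — there is no deduplication or depth-noise handling where the views overlap, so surfaces seen by both cameras will appear as two slightly offset layers unless the extrinsic calibration is very accurate.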