How to Build a Single Fused Point Cloud from Multiple Simultaneously Captured SVO Files?

Hi ZED team and community,

I’m working on a multi-camera setup using multiple ZED cameras (ZED 2i) that recorded data simultaneously using the multi-camera recording example provided in the SDK. As a result, I now have multiple .svo files, one from each camera, all captured at the same time but from different perspectives.

My Goal:

I would like to build a single fused point cloud (preferably in world coordinates) from these SVO files, essentially reconstructing a more complete 3D scene from multiple viewpoints. Think of it like merging the depth/point cloud data from multiple cameras into a unified 3D model.

  • Is there a pre-existing tool, script, or workflow in the ZED SDK to load multiple SVO files and generate a fused global point cloud or mesh?
  • If not, would it be possible (or recommended) to simulate a live stream from .svo playback and feed it into the ZED Fusion pipeline?
  • How can we synchronize and register the individual point clouds properly — do we need to provide extrinsic calibration between the cameras manually?
  • Is there an official or community-supported best practice for doing this?

Thanks in advance.

Hi @nitingarg
This feature is not available in the ZED SDK.
You can use external libraries for point cloud manipulation (e.g. Open3D, PCL) and the implementations of the ICP (Iterative Closest Point) algorithm and its variants that they provide.
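For illustration, here is a minimal Open3D sketch of pairwise ICP registration between two clouds exported from two cameras. This is not an official ZED workflow; the file names, voxel size, and distance threshold are placeholder assumptions, and the identity initial guess should be replaced with your inter-camera extrinsics if you have them.

```python
import numpy as np
import open3d as o3d

# Point clouds previously exported from each camera (e.g., saved to PLY
# from Camera.retrieve_measure() output). Placeholder file names.
source = o3d.io.read_point_cloud("cam1.ply")  # cloud to be moved
target = o3d.io.read_point_cloud("cam0.ply")  # reference frame

# Downsample to speed up and stabilize the registration.
source_down = source.voxel_down_sample(voxel_size=0.02)
target_down = target.voxel_down_sample(voxel_size=0.02)

# Rough initial alignment; identity here, ideally your known extrinsics.
init = np.eye(4)

# Point-to-point ICP refinement.
result = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    max_correspondence_distance=0.05,  # meters
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the refined transform and merge into a single cloud.
fused = target + source.transform(result.transformation)
o3d.io.write_point_cloud("fused.ply", fused)
```

ICP only refines an alignment, so the closer the initial guess (for example, extrinsics from a calibration such as the ZED 360 output), the more reliable the result.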

Hi @Myzhar,

Thank you for your response.

As a follow-up, I would like to better understand the role of existing ZED tools in this context:

  1. ZED 360 Calibration Tool:
    I understand that the SDK does not directly support fusing multiple SVO files into a single point cloud, but could you please clarify the role of the ZED 360 Calibration Tool? Would this calibration be applicable when post-processing previously recorded SVO2 files?

  2. Spatial Mapping + Multi-Camera Setup:
    What is the multi-camera spatial mapping module’s intended use case? Does it rely on a live setup only, or can we simulate the same workflow using SVO file playback?

  3. ZED Fusion Examples:
    Is there any ZED Fusion example or guidance for using the ZED 360 calibration output to assist with aligning the maps/point clouds from multiple cameras?

Any clarification or best practices around using these tools for offline processing and multi-view fusion would be greatly appreciated.

Thanks again!

ZED 360 is used to set up a multi-camera outside-in system, with multiple static cameras looking at the same part of an environment to create a 360° view of it.
Read more here.
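If it helps, here is a small sketch of how the ZED 360 output could be reused offline, assuming the Python API (pyzed) of a 4.x SDK: the tool writes a fusion configuration JSON with one pose per camera, which the SDK can parse. The file name and coordinate conventions below are assumptions to adapt to your setup.

```python
import pyzed.sl as sl

# Parse the fusion configuration JSON produced by ZED 360.
# Placeholder file name; coordinate system and unit must match your pipeline.
configs = sl.read_fusion_configuration_file(
    "zed360_calib.json",
    sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP,
    sl.UNIT.METER)

for conf in configs:
    # conf.pose is the camera-to-world transform estimated by ZED 360.
    # It can place each camera's point cloud in a common world frame,
    # or serve as the initial guess for an ICP refinement.
    print(conf.serial_number, conf.pose)
```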

Yes, you can simulate the same workflow with SVO playback; you must use the Fusion module of the ZED SDK.
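As a hedged sketch of that idea, assuming the Python API of a 4.x SDK and following the pattern of the SDK's Fusion samples: open each SVO as a regular camera, publish it locally, and subscribe it to a Fusion instance with its pose. Paths and the identity pose below are placeholders, and the exact names should be checked against your SDK version.

```python
import pyzed.sl as sl

# Open the recorded SVO exactly as if it were a live camera.
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.set_from_svo_file("cam0.svo2")  # placeholder path
init_params.coordinate_units = sl.UNIT.METER
zed.open(init_params)

# Publish the camera's data so a local Fusion instance can consume it.
comm = sl.CommunicationParameters()
comm.set_for_shared_memory()
zed.start_publishing(comm)

# Create Fusion and subscribe the camera together with its world pose
# (identity here; in practice, the pose estimated by ZED 360).
fusion = sl.Fusion()
fusion.init(sl.InitFusionParameters())
uuid = sl.CameraIdentifier(zed.get_camera_information().serial_number)
fusion.subscribe(uuid, comm, sl.Transform())

# Drive both sides: grab frames from the SVO, then let Fusion process them.
runtime = sl.RuntimeParameters()
while zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    fusion.process()

zed.close()
```

A real setup would repeat the camera-side steps for every SVO and pass each camera's ZED 360 pose to subscribe() instead of the identity transform.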

Where did you read about this module? Multi-camera spatial mapping is not available in the ZED SDK. It will be provided in the future by Terra AI.

See the link above.