Multi-Camera Fusion Help: Two ZED cameras (one facing up, one facing down, non-overlapping) on Jetson — how to align point clouds and detections into one coordinate system?

I’m currently working on a drone inspection project that uses two ZED cameras (ZED X / ZED 2, etc.): one camera faces upward (to detect high obstacles or overhead objects), and the other faces downward (for ground or near-ground targets).

Both cameras are connected to the same Jetson device (e.g., Jetson Orin NX).

The goal is to fuse the measurements from both cameras — point clouds, object detections, and distance estimations — into one common reference frame (the drone body or world coordinate system), so that we can perform unified spatial analysis and collision or clearance evaluation.
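For reference, this is roughly the fusion step I have in mind once the extrinsics are known. It's only a sketch: the serial numbers are placeholders, the identity transforms stand in for the real camera-to-body extrinsics (we would seed them from the drone's CAD model), and it assumes the ZED Python API (pyzed).

```python
import numpy as np
import pyzed.sl as sl

# Placeholder camera-to-body extrinsics (4x4 homogeneous transforms).
# Identity here; in practice these would come from the drone's CAD model
# or from whatever calibration procedure we end up with.
T_BODY_UP = np.eye(4)    # up-facing camera frame -> body frame
T_BODY_DOWN = np.eye(4)  # down-facing camera frame -> body frame

def open_camera(serial):
    """Open one ZED by serial number and return the handle."""
    init = sl.InitParameters()
    init.set_from_serial_number(serial)
    init.depth_mode = sl.DEPTH_MODE.ULTRA
    init.coordinate_units = sl.UNIT.METER
    cam = sl.Camera()
    if cam.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError(f"Failed to open ZED {serial}")
    return cam

def grab_points_in_body(cam, runtime, T_body_cam):
    """Grab one frame and return its point cloud as (N, 3) in the body frame."""
    if cam.grab(runtime) != sl.ERROR_CODE.SUCCESS:
        return np.empty((0, 3))
    cloud = sl.Mat()
    cam.retrieve_measure(cloud, sl.MEASURE.XYZRGBA)
    xyz = cloud.get_data()[:, :, :3].reshape(-1, 3)  # camera-frame XYZ
    xyz = xyz[np.isfinite(xyz).all(axis=1)]          # drop NaN/inf depth
    # Rigid transform into the body frame: p_body = R @ p_cam + t
    return xyz @ T_body_cam[:3, :3].T + T_body_cam[:3, 3]

cam_up = open_camera(12345678)    # placeholder serial numbers
cam_down = open_camera(87654321)
runtime = sl.RuntimeParameters()

fused = np.vstack([
    grab_points_in_body(cam_up, runtime, T_BODY_UP),
    grab_points_in_body(cam_down, runtime, T_BODY_DOWN),
])
print(f"Fused cloud: {fused.shape[0]} points in the body frame")

cam_up.close()
cam_down.close()
```

Object detections and distance estimates would go through the same transform (apply T_body_cam to each detection's 3D position), so the open question is really just how to obtain T_BODY_UP and T_BODY_DOWN accurately.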

However, in our setup, the two cameras have almost no overlapping field of view, which makes the usual stereo or multi-camera calibration process difficult.

I haven’t tried any existing multi-camera calibration methods yet, because I’m not sure what’s feasible in a non-overlapping case.
So before I proceed, I’d like to ask for advice from the StereoLabs team or anyone who has experience with similar setups.

Hi @Minion-s,

Computing the extrinsics of non-overlapping cameras is definitely a challenging subject. As you've mentioned, this feature is not available in our calibration tools yet, but we are currently looking into solutions to provide multi-camera calibration for any configuration out of the box. This is still in development, so I can't share much of a timeline here.

In terms of multi-camera calibration, I would recommend looking into "sparse reconstruction", and in particular the colmap repository. If you move the rigidly mounted pair through a textured environment and feed synchronized images from both cameras into a single reconstruction, COLMAP can register all views in one common frame, and the relative pose between two simultaneously captured images then gives you the camera-to-camera extrinsic.
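To make that concrete, here is a sketch (not Stereolabs code) of extracting the relative extrinsic once COLMAP has registered images from both cameras into one reconstruction. COLMAP stores world-to-camera poses in images.txt as a quaternion plus translation; the qvec/tvec values below are placeholders to be replaced with the poses of two frames captured at the same instant.

```python
import numpy as np

def pose_to_T(qvec, tvec):
    """Convert a COLMAP image pose (quaternion qw,qx,qy,qz + translation)
    into a 4x4 world-to-camera homogeneous transform."""
    w, x, y, z = qvec
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec
    return T

# Placeholder poses, as read from COLMAP's images.txt, for two frames
# captured at the same instant by the up- and down-facing cameras.
T_up_world = pose_to_T((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
T_down_world = pose_to_T((0.707, 0.707, 0.0, 0.0), (0.1, 0.0, 0.0))

# Relative extrinsic mapping down-camera coordinates into the up-camera
# frame. Both poses are world->camera, so:
# p_up = T_up_world @ inv(T_down_world) @ p_down
T_up_down = T_up_world @ np.linalg.inv(T_down_world)
print(T_up_down)
```

Since the rig is rigid, T_up_down should be constant over time, so averaging the estimate over many synchronized frame pairs helps reject registration noise.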

Other solutions use a third, mobile camera to match keypoints with each of the fixed cameras in turn; this could be an option for you as well, as sketched below.
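The underlying idea is transform chaining: if the mobile camera's pose can be estimated relative to each fixed camera (for instance via a calibration board seen simultaneously by the mobile camera and one fixed camera at a time), the two estimates compose into the extrinsic the fixed pair cannot observe directly. A minimal sketch with identity placeholders:

```python
import numpy as np

# Placeholder poses obtained with the help of a mobile camera M, e.g. via
# a calibration board seen simultaneously by M and one fixed camera at a time.
T_up_mobile = np.eye(4)    # mobile-camera frame -> up-camera frame
T_mobile_down = np.eye(4)  # down-camera frame -> mobile-camera frame

# Compose the two estimates: p_up = T_up_down @ p_down
T_up_down = T_up_mobile @ T_mobile_down

# Sanity check that the result is still a rigid transform.
R = T_up_down[:3, :3]
assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)
assert np.isclose(np.linalg.det(R), 1.0, atol=1e-6)
```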