Pose fusion estimation using multiple cameras

Hi there,
Just for clarification, I couldn’t find a clear answer to this question.

Is it possible to do pose estimation by fusing information from multiple cameras? For example, I have different types of cameras: ZED X, ZED X Mini, ZED 2i. Can I attach a ZED X and a ZED X Mini to one Jetson Orin, rigidly mounted on the frame of a flying drone and facing in different directions, and then fuse their pose, odometry, or point clouds using the recent ZED SDK?

If so, is there a guide for the Fusion API? The one described for ZED360 does not work the way I think it should.

If these features do not exist yet, or work differently from what I have described, when can I expect updates regarding them?

@Myzhar, would you please weigh in on this matter? Thank you.


What you describe is perfectly possible with the Fusion API, available in early access with 4.0.x.
ZED 360 is focused on Body Tracking, which is not what you need. However, you can check our geotracking samples. These are made to ingest GNSS data and fuse it with the odometry of one or more ZED cameras. You can also fuse the odometry of the ZEDs without a GNSS.

The problem you'll have is that you need a fusion configuration file (documented here: Fusion | Stereolabs).
This file is usually created with ZED 360; however, ZED 360 can only create it when the cameras' fields of view overlap. You'll have to create the configuration file yourself, or wait until we release a tool that allows generating it. If you know the positions of the cameras, it should be quite easy.
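Since the cameras are rigidly mounted and their relative poses are known, writing the configuration file by hand can be scripted. Below is a minimal sketch that generates such a JSON file from known mount poses. Note that the exact JSON layout (the `input`, `zed`, `fusion`, and `world` keys, and the `INTRA_PROCESS`/`USB_SERIAL` values) is an assumption modeled on files produced by ZED 360; the serial numbers are placeholders. Compare the output against a file actually generated by ZED 360 or the Fusion documentation before relying on it.

```python
import json

# Hypothetical sketch: hand-author a ZED Fusion calibration file when
# ZED 360 cannot be used (non-overlapping cameras). The JSON layout is
# an ASSUMPTION -- verify it against the Fusion documentation.
#
# Each camera's rigid mount on the drone frame is given as a rotation
# [rx, ry, rz] (radians) and a translation [tx, ty, tz] (meters)
# relative to a common "world" origin on the frame.
# Serial numbers below are placeholders for your cameras' real ones.
cameras = {
    "12345678": {  # e.g. the ZED X, facing forward
        "rotation": [0.0, 0.0, 0.0],
        "translation": [0.10, 0.0, 0.0],
    },
    "87654321": {  # e.g. the ZED X Mini, facing backward (yaw = pi)
        "rotation": [0.0, 3.14159265, 0.0],
        "translation": [-0.10, 0.0, 0.0],
    },
}

def build_fusion_config(cameras):
    """Build a dict in the (assumed) ZED Fusion calibration layout."""
    config = {}
    for serial, pose in cameras.items():
        config[serial] = {
            "input": {
                # Both cameras run on the same Jetson, so the Fusion
                # module connects to them in-process (assumed value).
                "fusion": {"type": "INTRA_PROCESS"},
                "zed": {"configuration": serial, "type": "USB_SERIAL"},
            },
            "world": {
                "rotation": pose["rotation"],
                "translation": pose["translation"],
            },
        }
    return config

if __name__ == "__main__":
    with open("fusion_config.json", "w") as f:
        json.dump(build_fusion_config(cameras), f, indent=4)
```

The important part is the `world` block: it must encode each camera's pose relative to one shared reference frame, so the Fusion module can express all odometry in common coordinates.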