Combining two SVO files to get fused data

Hello community

I would like to know whether it’s possible to combine two SVO2 recordings from two ZED X Mini cameras into a single fused SVO2 file (or other fused output). The cameras are mounted at different angles and positions on the frame of a moving vehicle, and their positions relative to each other are known (so a transformation between them is available). The goal is to get “better” relative pose and odometry tracking, depth estimation, and a spatial map / point cloud.

I believe the Fusion API is designed for multiple fixed cameras with overlapping FOVs, which is not my case (my cameras have no overlapping FOV and are fixed on a moving platform, but I know their positions relative to each other).

Recording was done with a C++ program that opens two camera instances on the latest SDK (5.1, on a Jetson with JetPack 6.2). Since both cameras run on the same Jetson, I believe their recordings share the same timestamp base, and it should be possible to synchronize them beforehand.
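
For context, the recording side looks roughly like this (a minimal sketch, not my exact code; camera IDs, filenames, and the frame count are placeholders):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed0, zed1;

    sl::InitParameters init;
    init.camera_resolution = sl::RESOLUTION::HD1200; // ZED X Mini full resolution
    init.camera_fps = 25;

    // Open both cameras on the same Jetson so the SVO2 timestamps
    // come from the same clock.
    init.input.setFromCameraID(0);
    if (zed0.open(init) != sl::ERROR_CODE::SUCCESS) return 1;
    init.input.setFromCameraID(1);
    if (zed1.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    zed0.enableRecording(sl::RecordingParameters("cam0.svo2", sl::SVO_COMPRESSION_MODE::H264));
    zed1.enableRecording(sl::RecordingParameters("cam1.svo2", sl::SVO_COMPRESSION_MODE::H264));

    for (int i = 0; i < 250; ++i) { // placeholder: ~10 s at 25 fps
        // Each successful grab() appends a frame to the corresponding SVO2.
        zed0.grab();
        zed1.grab();
    }

    zed0.close();
    zed1.close();
    return 0;
}
```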

Has anyone done this before? If so, I would appreciate some hints and a general direction, ideally with documentation references.

I know there is a configuration file that I need to create. Assuming I have already done this, what would be the next step? My rough understanding of it is sketched below; corrections welcome.
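
This is roughly what I assume the next step looks like, based on the Fusion samples (a sketch only; the config filename, coordinate system, and units are placeholders from my setup):

```cpp
#include <sl/Fusion.hpp>

int main() {
    // Read the multi-camera calibration file (per-camera input + pose).
    auto configs = sl::readFusionConfigurationFile(
        "fusion_config.json", sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP, sl::UNIT::METER);

    sl::Fusion fusion;
    sl::InitFusionParameters init_params;
    init_params.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP;
    init_params.coordinate_units = sl::UNIT::METER;
    fusion.init(init_params);

    // Subscribe each camera with its known pose relative to the rig.
    for (auto& conf : configs)
        fusion.subscribe(sl::CameraIdentifier(conf.serial_number),
                         conf.communication_parameters, conf.pose);

    // ... then call fusion.process() in a loop and retrieve fused data.
    fusion.close();
    return 0;
}
```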

Thanks

Hi @nyrjan
Fusion support for Positional Tracking will be introduced in future versions of the ZED SDK.

Currently, you need an external Kalman filter to fuse the odometry information from the two SVOs.
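
The core of that interim approach would be something along these lines (a minimal sketch of the measurement-fusion step only, assuming both odometry streams are already expressed in a common vehicle frame; a real filter would also need a prediction step and full covariance matrices):

```cpp
#include <array>

// One estimate from a per-camera odometry reading.
struct Estimate {
    std::array<double, 3> pos; // x, y, z in the common vehicle frame
    std::array<double, 3> var; // simplified per-axis variance (diagonal covariance)
};

// Standard two-sensor fusion: weight each source by its inverse variance.
Estimate fuse(const Estimate& a, const Estimate& b) {
    Estimate out{};
    for (int i = 0; i < 3; ++i) {
        double wa = 1.0 / a.var[i];
        double wb = 1.0 / b.var[i];
        out.pos[i] = (wa * a.pos[i] + wb * b.pos[i]) / (wa + wb);
        out.var[i] = 1.0 / (wa + wb); // fused variance is smaller than either input
    }
    return out;
}
```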

Hi @Myzhar,

Thanks for getting back to me.

I’m a little confused, though. A while back in another thread (link), I was told by @alassagne that fusing odometry from multiple ZEDs was possible using the Fusion API and the geotracking sample, as long as I created the config file myself. But now it sounds like this isn’t supported yet and I’d need an external Kalman filter instead?

Just want to make sure I understand what changed, or if I misunderstood something back then.

Quick recap of what I’m working with: two ZED X Mini cameras fixed to a moving vehicle, pointing in different directions (so no overlapping FOV). I know exactly where they are relative to each other, so creating the config file isn’t a problem.

What I’m trying to figure out is: with SDK 5.1, what can I actually do with this setup? Specifically:

  • Can I fuse odometry from both cameras to get better positioning?
  • Any way to combine depth data from both to get better accuracy?
  • Can I merge the point clouds into one? (My fallback plan is sketched after this list.)
  • Is building a combined spatial map possible?
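
On the point-cloud question: if there is no built-in support, my fallback would be a manual merge using the known extrinsics, roughly like this (a sketch only; `vehicle_T_cam`, the pose of each camera in a common vehicle frame, comes from my calibration):

```cpp
#include <sl/Camera.hpp>
#include <cmath>
#include <vector>

// Transform a point with a 4x4 pose matrix (rotation + translation).
sl::float3 transformPoint(sl::Transform T, const sl::float4& p) {
    return sl::float3(
        T(0, 0) * p.x + T(0, 1) * p.y + T(0, 2) * p.z + T(0, 3),
        T(1, 0) * p.x + T(1, 1) * p.y + T(1, 2) * p.z + T(1, 3),
        T(2, 0) * p.x + T(2, 1) * p.y + T(2, 2) * p.z + T(2, 3));
}

// Append one camera's point cloud, re-expressed in the vehicle frame,
// to a merged cloud. vehicle_T_cam is the known extrinsic calibration.
void appendCloud(sl::Camera& zed, const sl::Transform& vehicle_T_cam,
                 std::vector<sl::float3>& merged) {
    sl::Mat cloud;
    zed.retrieveMeasure(cloud, sl::MEASURE::XYZ); // CPU point cloud, float4 per pixel
    for (size_t y = 0; y < cloud.getHeight(); ++y) {
        for (size_t x = 0; x < cloud.getWidth(); ++x) {
            sl::float4 p;
            cloud.getValue(x, y, &p);
            if (!std::isfinite(p.z)) continue; // skip invalid depth
            merged.push_back(transformPoint(vehicle_T_cam, p));
        }
    }
}
```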

Regarding all of these questions: if these features are supported, should we generally expect the fusion to provide measurable accuracy gains, or does the benefit depend heavily on the specific configuration and motion constraints? In other words, would I clearly benefit from the multi-camera setup, or does it depend on the situation? (I’m using a Jetson Orin NX 16GB, which I know from experience can handle two ZED X Mini cameras at full resolution, streaming 25 fps to GStreamer and recording SVO2 on board.) This would be really helpful to understand for future integrations.

I do understand that some of this might require external tools, like a Kalman filter implementation, to fuse the data properly. But before I go down that road, I wanted to check if there’s anything already available, either within the current SDK features or in the Stereolabs GitHub repositories, that handles this kind of multi-camera fusion for a moving-platform setup. And since you mentioned it is planned for the future: do you have specific or approximate dates in mind for what we can expect as users?

If there’s an existing sample or utility I might have missed, I’d love to know about it. And if not, no worries; I just want to make sure I’m not reinventing the wheel.

Thanks again for your help, really appreciate it.