Camera Fusion/Stitching Help

I have one ZED X and two ZED X One GS cameras, all connected via GMSL2 to my Jetson Orin, with the SDK working. Can anyone guide me on how to fuse the ZED X and the two ZED X One GS cameras to get a 180° FOV? The ZED360 software does not support calibration with the ZED X One GS cameras, so any instructions would be helpful. We are aiming to fuse the three cameras for an autonomous vehicle project. I am comfortable working in any of Python, C++, or ROS 2.
Thank you.

Hi @EngineerAditya
Welcome to the Stereolabs community.

What type of “fusion” do you want to obtain?

  • 2D image stitching?
  • Fused 3D point cloud?
  • Anything else?

2D image stitching to run object detection on; we will be using LiDAR for depth and distance data.

You must perform a full extrinsic calibration and use external computer vision libraries.
The ZED SDK does not provide this feature, but you can find many existing solutions online that fulfill these requirements. “Image Stitching” is the right keyword to search for.

So I should use the ZED samples to understand how the cameras work, and then implement a custom script to do the image stitching, right?

I recommend you begin with the ZED X and ZED X One documentation, then explore the examples and tutorials available on GitHub.