Hi,
Looking at the examples on the ZED X page, I found a possible configuration called “Duo Reference Design for Autonomous Tractor”. I would like to replicate that configuration on my robot.
I thought the best way to implement it in code was to use the Fusion module, but its calibration process requires a minimum overlap between the two cameras' fields of view.
So, how can that be achieved? Do I need to implement the fusion of the two cameras manually in code, or is there another way to use the Fusion module of the ZED SDK?
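In case manual fusion is needed, the core of it is just expressing both cameras' data in a common reference frame. A minimal sketch in plain NumPy (not ZED SDK code), assuming a hypothetical hand-measured extrinsic between the two mounts:

```python
import numpy as np

# Hypothetical extrinsic of camera B expressed in camera A's frame,
# e.g. measured by hand on the robot mount: a yaw rotation + translation.
yaw = np.deg2rad(30.0)                       # assumed mounting angle
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [ 0.0,         1.0, 0.0       ],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
t = np.array([0.40, 0.0, 0.0])               # assumed 40 cm baseline

T_a_b = np.eye(4)                            # 4x4 homogeneous transform, B -> A
T_a_b[:3, :3] = R
T_a_b[:3, 3] = t

def fuse_point_clouds(points_a, points_b, T_a_b):
    """Transform camera B's points into camera A's frame and concatenate."""
    ones = np.ones((points_b.shape[0], 1))
    points_b_h = np.hstack([points_b, ones])          # homogeneous coordinates
    points_b_in_a = (T_a_b @ points_b_h.T).T[:, :3]   # apply B -> A transform
    return np.vstack([points_a, points_b_in_a])

# Example: a point at B's origin lands at the measured baseline offset in A.
fused = fuse_point_clouds(np.zeros((1, 3)), np.zeros((1, 3)), T_a_b)
```

The same transform would apply to depth maps converted to point clouds; the hard part in practice is obtaining an accurate extrinsic, which is exactly what a calibration tool like ZED360 automates.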
The ZED360 tool is designed to calibrate an outside-in multi-camera system, where multiple cameras are placed statically in the environment to track moving objects from multiple points of view.
We do not currently have automatic calibration systems that allow you to obtain precise inside-out extrinsic calibration parameters, but we are working on this subject, and we expect to release solutions soon.
I’m not sure which of the two following configurations would be better for positional tracking and obstacle avoidance!
The first idea was to use two ZEDs, one looking forward-left and one forward-right, with an overlap of 10 degrees, and to combine them with the Fusion module.
The second idea was to have one camera facing forward and one facing backward.
If I understand correctly, in both cases I cannot use the ZED360 tool because the cameras are moving.
So I need to choose based on other criteria. In either case, do I need to fuse the data myself, or can I use the Fusion module without the ZED360 calibration in some way?
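For reference on the first layout, the relative yaw between the two cameras follows directly from the horizontal FOV and the desired overlap (relative yaw = FOV − overlap). A quick sketch, assuming a hypothetical 110° horizontal FOV per camera (the real value depends on the camera model, lens, and resolution):

```python
import numpy as np

HFOV_DEG = 110.0      # assumed horizontal field of view per camera
OVERLAP_DEG = 10.0    # desired overlap between the two views

# Relative yaw between the optical axes so the views overlap by OVERLAP_DEG.
relative_yaw_deg = HFOV_DEG - OVERLAP_DEG            # 100 degrees
half_yaw = np.deg2rad(relative_yaw_deg / 2.0)        # each camera yawed +/- 50 deg

def yaw_matrix(angle):
    """Rotation about the vertical (Y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

R_left  = yaw_matrix(-half_yaw)   # forward-left camera
R_right = yaw_matrix(+half_yaw)   # forward-right camera

# Relative rotation from the left camera's frame to the right camera's frame:
# this is the extrinsic rotation a manual calibration would have to provide.
R_rel = R_left.T @ R_right

# Total horizontal coverage of the pair: 2*HFOV - overlap.
coverage_deg = 2 * HFOV_DEG - OVERLAP_DEG
```

This kind of mount-based estimate gives only a nominal extrinsic; mechanical tolerances mean a real deployment would still need refinement against measured data.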