ZED360 Fusion Problem (Calibration)

With the ZED360 app, I cannot properly calibrate the data I receive from 2 cameras. The skeletons are detected by skeleton tracking, but they do not fit together.
What could be the reason for this?
How can I find the algorithm and code running behind the calibration process?

Hi,

I’m sorry but we don’t provide the source code of the calibration tool.

To perform a calibration with ZED360, you need to first add all the cameras you want to calibrate, select the calibration mode “skeleton” and click on “Start Calibration”.

Next, walk slowly around your room, making sure you cover the entire area seen by the cameras. Then stop the calibration and save the generated calibration file.

If you feel the calibration did not perform well, you can try taking a bit more time during the calibration process.
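If it helps, here is roughly how the saved calibration file is consumed afterwards with the Fusion module. This is only a minimal sketch based on the ZED SDK 4.x Python samples; the exact function and field names (read_fusion_configuration_file, CameraIdentifier, the configuration fields) may differ depending on your SDK version, and the file path is just an example:

```python
import pyzed.sl as sl

# Load the calibration file exported by ZED360 (path is an example).
configs = sl.read_fusion_configuration_file(
    "zed360_calib.json", sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP, sl.UNIT.METER)

# Initialize the Fusion module with the same coordinate frame and units.
init_params = sl.InitFusionParameters()
init_params.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
init_params.coordinate_units = sl.UNIT.METER
fusion = sl.Fusion()
fusion.init(init_params)

# Subscribe each calibrated camera with the pose found during calibration.
for conf in configs:
    uuid = sl.CameraIdentifier(conf.serial_number)
    fusion.subscribe(uuid, conf.communication_parameters, conf.pose)
```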

Best,
Benjamin Vallon

Stereolabs Support


Thank you for your answer.

Is it (or when will it be) possible to change the background options (enable_body_fitting, enable_tracking, camera_resolution, camera_fps, etc.)?

What resolution do the cameras run at in ZED360? I may need to reduce the resolution beyond a certain number of cameras.

Is the rotation and translation matrix resulting from the calibration applied to the detected skeleton points, or does it describe the camera position and orientation?

I have not yet obtained the result I wanted because the skeletons did not fit together completely in ZED360.
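For clarity on the third question: if the matrix describes the camera pose, then a keypoint detected in a camera's frame would be brought into the common world frame as sketched below. This is only a generic illustration with made-up numbers, not the actual ZED360 convention:

```python
import numpy as np

# Example camera pose (rotation R and translation t) from a calibration file.
# These values are made up purely for illustration.
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])   # camera rotated 90 degrees around Y
t = np.array([1.5, 0.0, 2.0])      # camera position in the world frame

# A skeleton keypoint expressed in that camera's coordinate frame.
p_cam = np.array([0.2, 1.1, 3.0])

# If (R, t) is the camera-to-world pose, the point in the shared world frame is:
p_world = R @ p_cam + t
print(p_world)
```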