Multi-Camera Fusion API gives worse results than a single camera in body tracking. Is this normal?
I created a calibration JSON file with ZED360, but when I ran the Fusion API with it, the results were worse than with a single camera: bodies are not perceived completely, and the skeletons shake. A post said this was fixed in 4.0.1, but after updating, the same problems persist.
This type of issue usually comes from an inaccurate calibration. We are currently fixing some issues that were degrading the quality of the calibration computed by ZED360.
Those improvements will be available in v4.0.2 of the ZED SDK, by the end of the week or early next week.
Looking at the calibration output, I saw that the JSON file contains rotation and translation data for each camera. My assumption is that if, in ZED360, the fused skeleton points line up well at any point in the space, the calibration is good enough. Is that the wrong approach?
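Beyond eyeballing skeleton overlap in ZED360, one cheap sanity check is to validate the extrinsics in the JSON file directly: the rotation should decode to a proper orthonormal matrix, and the camera positions should match the measured room layout. The sketch below assumes a ZED360-style layout where each camera's serial number maps to a `"world"` entry with a Rodrigues axis-angle `"rotation"` and a `"translation"` in meters; the exact key names and the `CALIB_JSON` sample are assumptions, so adapt them to your file.

```python
import json
import math

# Hypothetical ZED360-style calibration snippet (serial numbers and
# values are made up for illustration; adapt key names to your file).
CALIB_JSON = """
{
  "12345678": {"world": {"rotation": [0.0, 0.0, 0.0],
                         "translation": [0.0, 0.0, 0.0]}},
  "87654321": {"world": {"rotation": [0.0, 1.5708, 0.0],
                         "translation": [2.0, 0.0, 2.0]}}
}
"""

def rodrigues_to_matrix(rvec):
    """Convert an axis-angle (Rodrigues) vector to a 3x3 rotation matrix."""
    theta = math.sqrt(sum(c * c for c in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in rvec)
    c, s = math.cos(theta), math.sin(theta)
    v = 1.0 - c
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s,  kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,       ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s,  c + kz * kz * v],
    ]

def is_orthonormal(R, tol=1e-6):
    """A valid rotation satisfies R @ R^T == identity (within tolerance)."""
    for i in range(3):
        for j in range(3):
            dot = sum(R[i][k] * R[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

calib = json.loads(CALIB_JSON)
for serial, entry in calib.items():
    R = rodrigues_to_matrix(entry["world"]["rotation"])
    t = entry["world"]["translation"]
    dist = math.sqrt(sum(c * c for c in t))
    print("%s | rotation valid: %s | distance from origin: %.2f m"
          % (serial, is_orthonormal(R), dist))
```

If a camera's reported distance from the origin disagrees with a tape-measure check of the physical setup, that is a strong hint the calibration (not the fusion itself) is the problem.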
We are looking forward to the new version.