Hi!
We use ZED cameras to measure different objects (e.g. cardboard boxes). After ~4 different locations we hit strange behavior: 2 new viewpoints give us a measurement error of around ±7-10 cm, but only on the Z-axis of the world coordinate system (the X/Y measurements are quite good). We are fairly sure it is not a camera hardware issue (we tried different units), and it is not the depth mode either (we switched to ULTRA and got consistent ~5 cm errors across different cameras). The other 2 cameras in the same location give us accurate estimates with 1-2 cm errors.
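To make the per-axis error pattern concrete, here is a minimal sketch of how we quantify it. The numbers below are hypothetical placeholders, not our actual measurements; the idea is simply to compare measured positions against a known reference, per axis, in the world coordinate system:

```python
import numpy as np

# Hypothetical data: measured vs. ground-truth object positions (metres)
# in the world coordinate system, one row per measurement.
measured = np.array([
    [0.50, 0.30, 1.00],
    [0.51, 0.29, 1.08],
    [0.49, 0.31, 0.93],
])
ground_truth = np.array([
    [0.50, 0.30, 1.00],
    [0.50, 0.30, 1.00],
    [0.50, 0.30, 1.00],
])

errors = measured - ground_truth  # signed error per axis (metres)
for axis, name in enumerate("XYZ"):
    e = errors[:, axis]
    print(f"{name}: mean {e.mean() * 100:+.1f} cm, "
          f"max abs {np.abs(e).max() * 100:.1f} cm")
```

With real data this prints centimetre-level X/Y errors but much larger Z errors from the problematic viewpoints, which is the asymmetry described above.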
So we suspect there is something special about this point of view (or the objects, or the background, whatever) that is somehow a problem for your neural networks.
I can provide more information (*.svo files and a Jupyter notebook with illustrations of this problem); please give me an email address!
If you also have any general advice, it would be great to hear it! Thanks.
Hi @pixml
What ZED SDK version are you using with your tests?
What specific Neural Depth mode?
Do you have pictures showing the test conditions and the "views" that cause these issues?
Have you tested whether the same problem persists with the new, improved AI depth modes available in the latest ZED SDK v5.3?
We use ZED SDK 5.0.7 with the NEURAL depth mode. As far as I can see from the changelog ( https://github.com/stereolabs/zed-sdk/releases ), the NEURAL depth mode hasn't changed since 5.0.7.
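For reference, this is roughly how the depth mode is configured in our pipeline. A minimal sketch using the ZED Python API, assuming a recorded SVO file (the file path is a placeholder); it is a config fragment, not our full measurement code:

```python
import pyzed.sl as sl

# Configure playback of a recorded SVO with the NEURAL depth mode,
# matching the setup described above (SDK 5.0.7).
init_params = sl.InitParameters()
init_params.set_from_svo_file("recording.svo")        # placeholder path
init_params.depth_mode = sl.DEPTH_MODE.NEURAL          # the mode under discussion
init_params.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
status = zed.open(init_params)
if status != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError(f"Failed to open SVO: {status}")

# Depth is then retrieved per frame, e.g.:
depth = sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
zed.close()
```

Swapping `sl.DEPTH_MODE.NEURAL` for `sl.DEPTH_MODE.ULTRA` here is how we reproduced the ~5 cm errors mentioned earlier.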
And yes, I can share pictures, *.svo files, and code samples if you can provide an email address.