Spatial distortion using NEURAL mode

In the video clip attached here, there are several issues that seem to stem from the NEURAL depth algorithm:

  1. Planar targets appear very non-planar
  2. Small targets get extruded to much greater depth at their edges, producing a “comet tail” effect
  3. Red targets appear at a significant depth offset; e.g., a target 1.2 m from the camera appears at 2.2 m

I’ve tried to capture some of these issues in this screenshot, but they’re more apparent in the video I’ve linked above.

Is there any way to mitigate these effects? In my use case, I don’t need extreme speed, but I do need reliable depth accuracy. I fabricated a 2m-tall calibration target to spatially match the ZED 2i’s output to another sensor, and with such significant distortion, I cannot register the sensors together.
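
For context, this is roughly how I pull the point cloud that I register against the other sensor (a minimal sketch using the ZED Python API; nothing exotic in the parameters):

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open the camera")

runtime_params = sl.RuntimeParameters()
cloud = sl.Mat()

if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    # XYZ point cloud in the left-camera frame, one point per pixel.
    zed.retrieve_measure(cloud, sl.MEASURE.XYZ)
    xyz = cloud.get_data()[:, :, :3]  # H x W x 3; NaN where depth is invalid
    # xyz is what gets registered against the other sensor.

zed.close()
```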

Please let me know if you need more information. Thanks!

Hi @underride
Please record an SVO2 with ZED Explorer under the same conditions and share it with us, so we can verify the results as if we were using your camera.

I also recommend tuning the “Confidence Threshold” value to remove depth values with low confidence.
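
For example, with the Python API it would look roughly like this (a sketch; `confidence_threshold` is a runtime parameter where 100 keeps every pixel and lower values filter more aggressively):

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open the camera")

runtime_params = sl.RuntimeParameters()
# 100 keeps every pixel; lowering the value discards low-confidence depth.
runtime_params.confidence_threshold = 50

if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    depth = sl.Mat()
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)

zed.close()
```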

Thanks @Myzhar. I’ve uploaded a few things into this folder (please let me know if you can’t access it):

  • HTML files with interactive Plotly plots of the data
  • SVO2 files for four collections
  • The original video from two weeks ago (still there)

I notice that the file ending in _30 shows the large depth displacement of the red target: it appears a meter behind where it actually is, along a helical target arrangement.
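
For reference, I’m replaying that recording roughly like this to sample the red target’s depth (a sketch; the filename and pixel coordinates are placeholders):

```python
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.set_from_svo_file("collection_30.svo2")  # placeholder for the file ending in _30
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open the SVO2")

runtime_params = sl.RuntimeParameters()
depth = sl.Mat()

# grab() returns a non-SUCCESS code at the end of the SVO, ending the loop.
while zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
    # (640, 360) is a hypothetical pixel on the red target.
    err, d = depth.get_value(640, 360)
    if err == sl.ERROR_CODE.SUCCESS:
        print(f"red target depth: {d:.2f} m")  # reads ~1 m farther than it should

zed.close()
```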