Distance accuracy


I am trying to use a ZED 2i to measure objects, so I need accurate depth measurements.
I am measuring the distance to leaves, but when I compare the distances with an L515 I see large errors at some points.

Here is an example:

As you can see, on the left part I have a 50 cm difference and on the right a 20 cm difference, but the measured ground distance is OK.

I used the parameters FILL and NEURAL.

How could I improve the accuracy ?

Hi @bpasserat
the problem is the FILL parameter.
It must not be used to measure depth because it introduces smoothed, estimated values to “fill the holes”, so in zones where a depth hole was present you can retrieve invalid depth values.
The FILL mode was introduced for a better AR/VR experience, not for real depth estimation.

What happens if you disable the FILL mode?
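For reference, disabling FILL while keeping NEURAL depth looks roughly like this with the Python API (`pyzed`). This is a minimal sketch, not a definitive setup: it assumes a ZED SDK 4.x install, where fill mode is toggled with `RuntimeParameters.enable_fill_mode` (on SDK 3.x the equivalent was `RuntimeParameters.sensing_mode` with `SENSING_MODE.STANDARD`):

```python
import pyzed.sl as sl

# Open the camera with NEURAL depth (assumes ZED SDK 4.x and pyzed installed).
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.NEURAL
init.coordinate_units = sl.UNIT.METER

cam = sl.Camera()
if cam.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED camera")

# For measurement, grab frames WITHOUT fill mode, so invalid pixels
# stay NaN/inf instead of being replaced by smoothed estimates.
runtime = sl.RuntimeParameters()
runtime.enable_fill_mode = False  # SDK 4.x; use sensing_mode on SDK 3.x

depth = sl.Mat()
if cam.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    cam.retrieve_measure(depth, sl.MEASURE.DEPTH)

cam.close()
```

This config fragment requires a connected camera to run; the point is only where the FILL toggle lives relative to the depth-mode choice.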

Do you have any hints about how the “fill the holes” algorithm works?
I checked the whole area on the left and did not find any points with the correct value.
Does that mean that if I use STANDARD mode I will only get NaN there?

I’m sorry, but the FILL mode algorithm is proprietary and not publicly documented.

No, this is not correct.
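To illustrate what invalid pixels look like in practice: the SDK marks unmeasurable pixels with non-finite values (NaN for occlusions, -inf/+inf for too-close/too-far points) rather than invalidating a whole region, so you can mask them out with numpy. A minimal sketch with a synthetic depth map (the values are illustrative, not real camera output):

```python
import numpy as np

# Synthetic 3x3 depth map in meters; in STANDARD mode the ZED SDK marks
# occluded pixels as NaN, too-close pixels as -inf and too-far as +inf.
depth = np.array([
    [1.20, np.nan, 1.25],
    [-np.inf, 1.30, np.inf],
    [1.18, 1.22, np.nan],
], dtype=np.float32)

valid = np.isfinite(depth)   # True only for real measurements
measured = depth[valid]      # keep the usable distances

print(int(valid.sum()))      # 5 valid pixels out of 9
print(round(float(measured.mean()), 2))
```

Only the finite values are real measurements, so any averaging or comparison against another sensor should be done on the masked array.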

@Myzhar,

Thanks for your answers. I had time for extra tests with NEURAL and the two modes, FILL and STANDARD.

I was surprised to see no (or very little) difference between the two modes with NEURAL.

I double-checked my code by testing QUALITY with both modes (STANDARD and FILL), which makes a big difference.

I do not see such a difference with NEURAL.

Do you have extra tests I could run? How could I further improve the distance accuracy?

Bumping this: do you have any extra information about STANDARD and FILL modes with NEURAL?

Hi @bpasserat
the FILL mode was created to add missing depth information when using “standard” stereo-vision algorithms.
NEURAL depth mode is different: the depth map is generated using AI and the result is much denser, which is why FILL mode does not add much new information on top of it.
As I told you before, FILL mode must not be used to extract depth information; it is an algorithm that generates better “visual” data for use in AR and VR.
When you get a depth map with small gaps around object contours, that is expected and is caused by parallax.

To be more precise, my question was: why, when I tested FILL mode against STANDARD mode with NEURAL, did I not see any difference?