What should I do for more accurate Depth Sensing?

I belong to an ecology research group and am interested in 3D object tracking of insects for behavioral experiments.
What can be adjusted to make the depth sensing of the bounding boxes more accurate?


(upper left) Left view of the experimental scene captured with the ZED 2i.
(upper right) Depth of the point cloud.
(lower) The result of depth sensing on the bounding boxes.

The experiment is conducted in a dark room with lighting at the back.
The relative position and angle of the camera, acrylic chamber, and lighting are fixed for each experiment.
The acrylic chamber is a standard rectangular box of 20×30×40 cm, with its horizontal width almost parallel to the left camera.
Ten insects are placed in the acrylic chamber for each experiment and recorded for two hours.
3D object tracking is then performed on the recorded images so that the sequential positional information can be used for the analysis.

We use darknet_zed.py from zed_python_samples in zed-yolo to obtain the x, y, and z information of each insect's bounding box.
We confirmed that the bounding boxes are created well.
However, a problem occurs when sensing z (depth) from the bounding box and the depth map.

If you look at the point cloud image, the depth on the right side and the left side differs greatly. (There is also the problem that the depth at the same position varies considerably between images.)
As a result, the measured depth of the bounding boxes is also smaller on the left than on the right, and the position and depth of the insects on the right do not match.
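
For illustration, below is a minimal sketch of one way the depth of a bounding box could be read from the retrieved depth measure, taking the median of the valid pixels inside the box so that reflections and invalid pixels weigh less. The helper name and the box coordinates are placeholders, not our actual darknet_zed.py code.

    import numpy as np
    import pyzed.sl as sl

    def bbox_depth(depth_mat, x_min, y_min, x_max, y_max):
        """Median of the valid depth values inside a bounding box (illustrative sketch).

        depth_mat is an sl.Mat retrieved with sl.MEASURE.DEPTH; the box
        coordinates are pixel indices in the left image.
        """
        depth = depth_mat.get_data()             # 2D numpy array in metres (UNIT.METER)
        roi = depth[y_min:y_max, x_min:x_max]    # crop to the bounding box
        valid = roi[np.isfinite(roi)]            # drop NaN/inf (invalid or occluded pixels)
        if valid.size == 0:
            return None                          # no usable depth inside this box
        return float(np.median(valid))           # median is less sensitive to outliers

    # Usage sketch, after cam.grab(runtime) has succeeded:
    # depth_mat = sl.Mat()
    # cam.retrieve_measure(depth_mat, sl.MEASURE.DEPTH)
    # z = bbox_depth(depth_mat, 420, 310, 470, 350)   # box coordinates are made up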

Solving this problem is very important for our research.
What changes to the filming environment, the analysis method, or the code could solve it?
We are thinking of methods such as using flicker-free lighting, using a non-reflective acrylic chamber, and adjusting the position of the light source.
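
On the flicker side, besides changing the bulb, we could also try disabling auto exposure and fixing the exposure and gain manually so that the two sensors are not affected differently by the lamp flicker. This is only a sketch assuming the ZED SDK video settings; the numeric values are placeholders we would still have to tune:

    import pyzed.sl as sl

    # cam is the already opened sl.Camera instance from our script.
    # Placeholder values; they would need tuning for the dark-room setup.
    cam.set_camera_settings(sl.VIDEO_SETTINGS.AEC_AGC, 0)     # disable auto exposure/gain
    cam.set_camera_settings(sl.VIDEO_SETTINGS.EXPOSURE, 50)   # fixed exposure (% of frame time)
    cam.set_camera_settings(sl.VIDEO_SETTINGS.GAIN, 20)       # keep gain low to limit noise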

The main options are as follows (a sketch of how they fit together follows the list).

  1. Filming Options:
    init.camera_resolution = sl.RESOLUTION.HD2K
    init.depth_mode = sl.DEPTH_MODE.NEURAL
    recording_param = sl.RecordingParameters(path_output, sl.SVO_COMPRESSION_MODE.H264)

  2. Analysis Options:
    init.coordinate_units = sl.UNIT.METER
    runtime.sensing_mode = sl.SENSING_MODE.FILL
    cam.retrieve_image(mat, sl.VIEW.LEFT)
    cam.retrieve_measure(point_cloud_mat, sl.MEASURE.XYZRGBA)
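
For context, this is roughly how those options fit together in an open/grab loop; it mirrors the SDK samples rather than our full script, and the loop length is a placeholder:

    import pyzed.sl as sl

    init = sl.InitParameters()
    init.camera_resolution = sl.RESOLUTION.HD2K
    init.depth_mode = sl.DEPTH_MODE.NEURAL
    init.coordinate_units = sl.UNIT.METER

    cam = sl.Camera()
    if cam.open(init) != sl.ERROR_CODE.SUCCESS:
        exit(1)

    runtime = sl.RuntimeParameters()
    runtime.sensing_mode = sl.SENSING_MODE.FILL   # fill holes in the depth map

    mat = sl.Mat()
    point_cloud_mat = sl.Mat()

    # Placeholder loop; the real script iterates over the recorded SVO frames.
    for _ in range(100):
        if cam.grab(runtime) == sl.ERROR_CODE.SUCCESS:
            cam.retrieve_image(mat, sl.VIEW.LEFT)                      # left RGB image
            cam.retrieve_measure(point_cloud_mat, sl.MEASURE.XYZRGBA)  # per-pixel XYZ + color

    cam.close()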

Thank you very much for reading this long post.

Hi @taein
welcome to the Stereolabs community.

The top left image shows evident image distortions that can lead to a poor depth estimation result.

  • Is there a glass or other kind of protection in front of the camera?
  • What kind of artificial illumination are you using?
  • Is it possible to get both left and right images obtained with the same grab call to compare them? (A small sketch of how to retrieve them follows this list.)
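
For reference, a minimal sketch of how both views can be retrieved from the same grab call; cam and runtime are the objects from your script, and the output file names are placeholders:

    import cv2
    import pyzed.sl as sl

    left_mat = sl.Mat()
    right_mat = sl.Mat()
    if cam.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        cam.retrieve_image(left_mat, sl.VIEW.LEFT)      # rectified left image
        cam.retrieve_image(right_mat, sl.VIEW.RIGHT)    # rectified right image from the same frame
        cv2.imwrite("left.png", left_mat.get_data())    # placeholder output paths
        cv2.imwrite("right.png", right_mat.get_data())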

I’m glad and grateful that the problem could be diagnosed from the picture.

  1. Actually, I didn’t even know that was necessary. What kind of protective material do you recommend?

  2. In the 50x70 light panel box, there is one 50 W 13x23 normal LED bulb.

    Model: LEDELCRD13550N-DHE
    Rating: 220 V, 60 Hz, 50 W
    Rated luminous flux: 5000 lm
    Luminous flux maintenance: 90%
    Light source color: daylight

(Is it possible to access “Coupang” from abroad?: https://www.coupang.com/vp/products/199923815?itemId=580129062&vendorItemId=4519504272&q=지속광+조명+50w&itemsCount=36&searchId=572f6cba2bba486e9548fc284ec7fa4e&rank=3&isAddedCart=)

And I also found that changing to a flicker-free bulb makes the results better!
(However, the results still show some depth mismatch, e.g. for the insects on the right side.)

I could test with an opaque acrylic chamber within 3 hours.

I suggest you put the camera close to the protection glass to remove reflections, and move it nearer to the target so that you use the camera’s full field of view.


Thank you for your answer. I will try with some additional changes.