How to improve depth accuracy with the ZED 2i at night

Hello, ZED team. I have a question about using the ZED 2i in a nighttime setting (not completely dark, as the scene is lit, but with the light source perpendicular to the camera), and when there’s a significant lack of texture in the background (such as an open yard or sky). How can I enhance the precision of the depth measurements in these conditions?

What kind of environment are you trying to scan? Can you upload an SVO so that I can have a better idea?

I’m sorry, but due to the large size of the SVO file, it’s difficult to upload. Instead, I’ll upload a photo to better illustrate my experiment scenario. Initially, I used the ZED camera to study the behavior of insects flying above a bright light and purchased a polarized version of the ZED 2i. However, I noticed the depth measurements seem inaccurate. How can I enhance the depth accuracy in this type of situation?

Unfortunately, there is not much you can do. Your image is very dark; human eyes would struggle too.

Plus, you want to detect flying insects (which are very small) with a polarized lens (which loses half the light); you are adding too many constraints.

In my experimental scene, the light is actually very bright: I want to observe the flight behavior of insects above a 1000 W bulb. When I first bought the camera, I chose the polarized version of the ZED 2i specifically to cope with the strong light. For object detection there is not much of a problem at the moment, since I do not need to detect very small insects, but the depth results are poor, so much so that when I plot the insects' flight paths the trajectories look very strange. I currently plan to add some markers in the background to increase the texture, such as a vertical rod with a reflective sticker. This scene does present some challenges, and I would like to ask whether I can adjust some parameters to improve the reliability of the depth information?

Yes. You can play with the depth confidence and texture confidence thresholds, for example. Also, try disabling the saturated area removal; maybe that can help?
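For reference, the parameters mentioned above can be set through the ZED SDK's runtime parameters. Below is a minimal Python sketch assuming the `pyzed` bindings; the attribute names (`confidence_threshold`, `texture_confidence_threshold`, `remove_saturated_areas`) and the specific threshold values are assumptions to verify against the API documentation for your SDK version, since they have changed across releases. It is a configuration sketch, not a definitive recipe, and requires a connected camera (or an SVO) to actually run.

```python
# Sketch: tuning ZED depth parameters for a bright-bulb, low-texture scene.
# Assumes the pyzed Python bindings; attribute names may vary by SDK version.
import pyzed.sl as sl

zed = sl.Camera()

init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.NEURAL   # assumed: more robust on low-texture areas
init.depth_stabilization = True          # temporal smoothing of the depth map

if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

runtime = sl.RuntimeParameters()
runtime.confidence_threshold = 50           # reject low-confidence depth pixels (0-100)
runtime.texture_confidence_threshold = 90   # relax rejection in low-texture regions
runtime.remove_saturated_areas = False      # keep depth over the bright bulb area

depth = sl.Mat()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)

zed.close()
```

Lowering the confidence thresholds filters out more unreliable pixels (at the cost of holes in the depth map), while keeping saturated areas may preserve measurements right above the light source.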

Thank you, I’ll make some attempts at this.

Hi, regarding the previous question, I have another idea. Is it possible to improve the accuracy of the measured depth by fusing data from two ZED2i devices? After the fusion, can I still use the detection and tracking code from the previous example?


In the current release, Fusion only works with body tracking and positional tracking, not point clouds. That will come in a future release.