Can anyone please tell me what is going on behind the scenes when the depth_stabilisation parameter is used? The only information I can find is that it "fuses and filters the depth map over several frames", but there is nothing on how that happens, or where it is likely to improve or degrade performance.
Experiments varying depth_stabilisation between 0 and 100 showed no noticeable difference in the depth measured on the objects of interest.
In the use case of a side-facing camera mounted on a moving vehicle, where there are likely to be few points of overlap between any two frames, is this mode introducing extra error or eliminating it, and how can I test this to confirm either case?
In dynamic tasks, it's recommended not to use a high value for depth_stabilisation, so as not to introduce blur effects.
The depth stabilization algorithm is used to reduce depth noise in static measurement tasks.
For your application, I recommend tuning the confidence threshold runtime parameter to improve the quality of the depth map.
What camera model are you using?
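For reference, the confidence threshold is a runtime parameter, so it can be adjusted per grab rather than at camera open. A minimal sketch using the `pyzed` Python wrapper (the threshold value 50 and the ULTRA depth mode are placeholders to tune for your scene; verify the names against your SDK version):

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.ULTRA  # placeholder depth mode

if zed.open(init) == sl.ERROR_CODE.SUCCESS:
    runtime = sl.RuntimeParameters()
    # Lower values discard more low-confidence pixels:
    # the map gets sparser but cleaner. 50 is only a starting point.
    runtime.confidence_threshold = 50

    depth = sl.Mat()
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
    zed.close()
```

Because it is a runtime parameter, you can also sweep it live and watch how the depth map density changes, instead of filtering only in post-processing.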
We are using the ZED 2i, and applying a depth + confidence filter in post-processing rather than at capture time.
To clarify, it is your belief that using depth_stabilisation will make the depth estimation worse in my use case?
@multicore-manticore For your use case I advise you to use a low value for
depth_stabilisation, 1 or 2 is recommended.
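To answer the "how can I test this" part of the original question: one way is to record an SVO file from the moving vehicle and replay the identical footage with different depth_stabilisation levels, then compare temporal variance (or error against ground truth, if available) per pixel. A sketch assuming the `pyzed` API; the file name `drive.svo` and frame count are placeholders:

```python
import pyzed.sl as sl
import numpy as np

def depth_samples(svo_path, stabilization, n_frames=100):
    """Replay a recording at a given depth_stabilisation level
    and collect the resulting depth maps for offline comparison."""
    init = sl.InitParameters()
    init.set_from_svo_file(svo_path)
    init.depth_stabilization = stabilization  # 0 disables temporal fusion

    zed = sl.Camera()
    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError("failed to open SVO file")

    runtime = sl.RuntimeParameters()
    depth = sl.Mat()
    frames = []
    for _ in range(n_frames):
        if zed.grab(runtime) != sl.ERROR_CODE.SUCCESS:
            break
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
        frames.append(depth.get_data().copy())
    zed.close()
    return np.stack(frames)

# Same footage, stabilisation off vs. low — any systematic difference
# (e.g. smearing on fast-moving regions) is attributable to the filter:
# off = depth_samples("drive.svo", 0)
# low = depth_samples("drive.svo", 1)
```

Replaying the same SVO removes scene-to-scene variation, so any difference between the two runs comes from the stabilisation filter alone.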