I’m using the custom object detector with my ZED 2 (SDK 3.7.0)
Initially, I used the sample code provided here. In that code, only the raw YOLO detection results are passed to the OpenCV viewer for visualization.
I then tried passing sl::Objects to the viewer instead of the raw YOLO results, and more objects than were actually detected started to be displayed.
After some tests, I noticed that the problem occurred only when enable_tracking was true.
Looking into Camera.hpp, I found that when tracking is enabled, an object that is no longer visible or detected passes to a SEARCHING state, in which its trajectory is estimated.
So, in order to visualize only detected objects, I tried to display only objects where:
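For reference, a minimal sketch of that filter: the states mirror sl::OBJECT_TRACKING_STATE, but the simplified TrackedObject struct here is a hypothetical stand-in for sl::ObjectData, just to illustrate the condition outside the SDK.

```cpp
#include <vector>

// Hypothetical stand-ins for sl::OBJECT_TRACKING_STATE and sl::ObjectData,
// used only to illustrate the filtering condition.
enum class TrackingState { OFF, OK, SEARCHING, TERMINATE };

struct TrackedObject {
    int id;
    TrackingState tracking_state;
};

// Keep only objects the tracker currently sees (state OK), dropping
// SEARCHING objects whose position is only being predicted.
std::vector<TrackedObject> visible_only(const std::vector<TrackedObject>& objects) {
    std::vector<TrackedObject> out;
    for (const auto& obj : objects)
        if (obj.tracking_state == TrackingState::OK)
            out.push_back(obj);
    return out;
}
```

With the real SDK the same idea would check each sl::ObjectData's tracking_state before drawing it.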
Hi @br0tda
Yes, to avoid detection jittering, we predict a detection for a short time period before moving the object to the SEARCHING state. This is the current behavior of our tracking.
In a future version of the ZED SDK we may add a parameter to disable this behavior, along with many other tuning parameters.
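In case it clarifies the behavior being described: conceptually, a missed object keeps its OK state (with a predicted box) for a short grace period before switching to SEARCHING. This sketch uses hypothetical types and a made-up frame budget, not the SDK's actual internals.

```cpp
// Hypothetical sketch of the grace period: when a tracked object is missed,
// it stays OK (reporting a predicted box) for a few frames before it is
// moved to SEARCHING. The 5-frame budget is an invented illustration.
enum class TrackingState { OK, SEARCHING };

struct Track {
    TrackingState state = TrackingState::OK;
    int missed_frames = 0;
};

void update(Track& t, bool detected_this_frame, int grace_frames = 5) {
    if (detected_this_frame) {
        t.missed_frames = 0;
        t.state = TrackingState::OK;
    } else if (++t.missed_frames > grace_frames) {
        t.state = TrackingState::SEARCHING;  // stop predicting, start searching
    }
    // else: still within the grace period, keep reporting the prediction as OK
}
```

This is why filtering on the tracking state alone still shows briefly-occluded objects: during the grace period they are reported as OK even though the detector no longer sees them.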
I found another “little” problem in the tracking behavior: the prediction takes precedence over the detection.
E.g., I have a network trained to detect 4 types of cones. If I move the camera so that one cone (Cone1) is covered and then shown again while other types of cones are being detected, there are situations in which, when Cone1 reappears, its detected class is wrong and matches that of a nearby cone, as you can see in the picture where a blue_cone is detected as an orange one.
That happens because, while the camera moves and the blue cone disappears, an orange cone is detected at the previous camera-frame position of the blue one. When the blue cone reappears, it is tagged as orange_cone instead of blue.
That’s not a network fault, because with tracking disabled these problems don’t occur.
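A possible workaround I'm considering (not an SDK feature, just a sketch with hypothetical simplified types): when a tracked object overlaps a fresh raw detection strongly enough, trust the detector's class over the label the tracker remembered.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical simplified box/label types, standing in for sl::ObjectData
// and the raw YOLO detections; not actual SDK types.
struct Box { float x, y, w, h; };
struct Detection { Box box; std::string label; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const Box& a, const Box& b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// If the tracked object overlaps a raw detection above min_iou, take the
// detector's label; otherwise keep the tracker's label.
std::string relabel(const Detection& tracked,
                    const std::vector<Detection>& raw,
                    float min_iou = 0.5f) {
    for (const auto& d : raw)
        if (iou(tracked.box, d.box) >= min_iou)
            return d.label;
    return tracked.label;  // no strong match: keep the tracked label
}
```

In the cone example, a reappearing blue cone wrongly tagged orange_cone would be re-labelled from the overlapping raw blue_cone detection.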