Base object detection model used in MULTI_CLASS_BOX_FAST for Jetson Nano


For now, we use the default OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_FAST in production on our Jetson Nano robots because we don’t have any additional requirements.
But for the next release, our team needs to detect an additional thing (a sandbox, to be accurate). I guess we have to use a custom model, but we would like to use the same base model so that we get the same performance. I can’t find this information in your documentation.
Can you give me the name of the base model you use? Is it a YOLO model or something like that?

Kind regards,



Yes, you can use a YOLO model. For an equivalent runtime you can target YOLOv5s at 416; other close models are YOLOv8s at 320 or YOLOv6n at 512.

For inference, you can use this sample with the ZED to run the model efficiently with the same 3D tracking workflow.
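As a minimal sketch of what feeding custom detections into the SDK involves: the custom-box ingestion expects each 2D box as four image-space corners, while YOLO typically outputs center-based `(x, y, w, h)` boxes. The helper below illustrates the conversion (the function name and exact corner ordering here are an assumption for illustration; check the sample's code for the authoritative version):

```python
def xywh_to_abcd(x, y, w, h):
    """Convert a YOLO-style center-based box (x, y, w, h) into the
    four-corner layout [A, B, C, D] (top-left, top-right, bottom-right,
    bottom-left) expected by the ZED custom-box ingestion."""
    x_min, x_max = x - w / 2.0, x + w / 2.0
    y_min, y_max = y - h / 2.0, y + h / 2.0
    return [
        [x_min, y_min],  # A: top-left
        [x_max, y_min],  # B: top-right
        [x_max, y_max],  # C: bottom-right
        [x_min, y_max],  # D: bottom-left
    ]
```

Each converted box can then be wrapped in a `sl.CustomBoxObjectData` (with its label and confidence) and passed to `zed.ingest_custom_box_objects(...)` so the SDK handles the 3D lifting and tracking, as in the linked sample.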


Thanks for your answer! It works pretty well with the v8s 320 model when the depth is greater than 0.4 m. Below that it works pretty badly and gives predictions of around 11 meters while I’m at 0.2 m. That’s behavior I don’t see when using your MULTI_CLASS_BOX_FAST model. Is there some magic in your SDK that isn’t replicated when we use a custom model as suggested in the link you provided?

Hi @adujardin,

Have you had time to look at my question, please?

Hi, sorry for the delay.

Yes, there are some additional filters to remove duplicates and outliers.
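For illustration only (the SDK’s actual filtering isn’t public, and this is not its implementation), a typical duplicate-removal step keeps the highest-confidence box among overlapping detections, using intersection-over-union (IoU) as the overlap measure:

```python
def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def remove_duplicates(detections, iou_thresh=0.5):
    """detections: list of (box, score). Keep the highest-score box
    among any group of boxes overlapping above iou_thresh."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k_box) < iou_thresh for k_box, _ in kept):
            kept.append((box, score))
    return kept
```

This is the same idea as classic non-maximum suppression; a custom pipeline that skips such a step can report several boxes (and depths) for one physical object.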

In that case, make sure it’s not out of range. The minimum depth can be set in the InitParameters; see the API doc: InitParameters Class Reference | API Reference | Stereolabs
Depth computation tends to give high values at very close range, because matching pixels with the right image is hard at very high disparity. Make sure the depth range parameters and the ZED model’s characteristics fit the range you’re targeting.
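As a configuration sketch (assuming the Python API, `pyzed`; the parameter names come from the InitParameters reference linked above, but the specific values here are illustrative):

```python
import pyzed.sl as sl

init_params = sl.InitParameters()
# Allow close-range depth; check your camera model's minimum
# supported distance in the documentation before lowering this.
init_params.depth_minimum_distance = 0.2  # meters
init_params.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
status = zed.open(init_params)
```

Lowering `depth_minimum_distance` increases the disparity search range, which can cost some performance, so set it only as low as your use case actually needs.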