In the past, I successfully used YOLO (Ultralytics) for custom object detection with my ZED cameras. I recently switched to the ZED SDK's built-in Custom Object Detection pipeline and tried to integrate my YOLO model via the custom_onnx_file option.
However, with either the .onnx or the .engine file exported from YOLO, no objects are detected: nothing appears, even though the model works fine outside the SDK.
While troubleshooting, ChatGPT suggested that the issue might be the ONNX output shape. My current model outputs:
(1, 5, N)
But it seems that the ZED SDK expects something like:
(1, N, 5)
Can someone confirm what the expected ONNX output shape is for custom models in the ZED SDK?
How can I export my custom YOLO detection model with that shape?
Any advice or sample working ONNX output spec would be greatly appreciated.
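For what it's worth, the difference between the two shapes is just a transpose of the last two axes, which can be applied to the raw tensor before post-processing. A minimal numpy sketch, assuming a single-class model whose 5 channels are cx, cy, w, h, class_score (the tensor here is a placeholder, not real model output):

```python
import numpy as np

# Hypothetical raw output of a single-class Ultralytics YOLO ONNX export:
# shape (1, 5, N), where the 5 rows are cx, cy, w, h, class_score.
N = 8400
raw = np.zeros((1, 5, N), dtype=np.float32)

# Transpose the last two axes to get (1, N, 5),
# so that each row describes one candidate box.
boxes_last = raw.transpose(0, 2, 1)
print(boxes_last.shape)  # (1, 8400, 5)
```

If the SDK really does expect channels-last output, doing this transpose in your own post-processing (or re-exporting the model so the graph itself emits that layout) should make the two ends agree.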
Hi,
For a typical Ultralytics YOLO model, it’s expecting a shape like [1,84,7581].
It should be an ONNX file. There are per-model instructions for exporting the correct format here: How to Use Export YOLO ONNX model to use the ZED Custom Object Detection - Stereolabs
YOLOv6 output is indeed shaped like [1,8400,85] (all of these examples are for the default COCO model with 80 classes), and that layout is also handled.
Could you post a screenshot, for example from Netron, showing your model's output shapes? For instance, here's a compatible yolov8n with fixed size:
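Once the output is in a per-row layout, each row still has to be turned into the box format the SDK ingests. A sketch of that conversion, assuming the usual cx, cy, w, h, score row layout; the ZED field name mentioned in the comment (CustomBoxObjectData.bounding_box_2d) is my understanding of the API, so double-check it against the SDK docs:

```python
import numpy as np

def xywh_to_corners(row):
    """Convert one (cx, cy, w, h, score) row from a (1, N, 5) output into
    four 2D corner points (top-left, top-right, bottom-right, bottom-left)
    plus the confidence score. As far as I know, the ZED SDK's custom
    detection ingestion (CustomBoxObjectData.bounding_box_2d) expects the
    corners in this clockwise order -- verify against your SDK version."""
    cx, cy, w, h, score = row
    x0, y0 = cx - w / 2.0, cy - h / 2.0  # top-left
    x1, y1 = cx + w / 2.0, cy + h / 2.0  # bottom-right
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]])
    return corners, float(score)

# One candidate detection: centre (320, 240), a 100x50 box, score 0.9
corners, score = xywh_to_corners(np.array([320.0, 240.0, 100.0, 50.0, 0.9]))
print(corners[0], score)  # top-left corner (270, 215) and confidence 0.9
```

This is only the per-box geometry; confidence thresholding and NMS still happen before (or instead of) ingestion, depending on how the exported graph is set up.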
Hello, I'm using a yolo12n detection model fine-tuned on a single class, which makes it effectively a binary model. I export the model with the Ultralytics yolo tool to an .onnx file, but when I actually use that .onnx file with the SDK, it detects nothing. I should add that my cameras are subscribed to Fusion, and the Python wrapper does not implement getting detections from Fusion, so I'm using grab() and retrieve_image() to receive frames.