I want to use the ZED ROS 2 wrapper’s object detection capabilities with a custom detector. Following the docs, I created an ONNX file from a standard YOLOv8 (yolov8n.pt) model.
Using the exported ONNX I could successfully run the ZED SDK samples (without ROS). However, if I run the ROS wrapper (even on the same SVO file) and try to visualize the results (using the zed_display_rviz2 package), no objects are detected at all.
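For reference, the export itself was just the standard Ultralytics CLI, roughly like this:

```bash
# Install Ultralytics and export the pretrained nano model to ONNX
pip install ultralytics
yolo export model=yolov8n.pt format=onnx
```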
Hi @birneamstiel
We are working on improving the custom detection support in the ZED ROS 2 Wrapper, but it should already work as it is now.
Can you please test your model with the ZED SDK native code and let me know if it works with it?
Thanks for the quick reply. I have already tested my model with the tensorrt_yolov5-v6-v8_onnx_internal example you linked to (and with the same SVO file I used with the ROS 2 wrapper) and it works fine.
Could you maybe list the steps you took to get a working ONNX model file, along with a working set of object_detection parameters for the wrapper?
Yes, I’ve set the right path; the wrapper also finds the model and optimizes it successfully. However, no objects are detected…
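For completeness, this is roughly how I double-check that the running node actually picks up the configured values (the node name assumes the default "zed" camera name, and the parameter name in the second command is only an example, so adjust to whatever the list reports):

```bash
# List the object-detection / ONNX related parameters the node actually loaded
ros2 param list /zed/zed_node | grep -i "object_detection\|onnx"

# Then read the ones of interest, e.g. the custom input size
ros2 param get /zed/zed_node object_detection.custom_onnx_input_size
```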
Thanks for the hint regarding the input size. I’ve now created another ONNX file using a dynamic input size (yolo export model=yolov8n.pt format=onnx dynamic=True) and tried setting the custom_onnx_input_size parameter to both 320 and 640; neither worked. The tensorrt_yolov5-v6-v8_onnx_internal sample also works with this ONNX file, though.
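For reference, here is the exact export I used this time, plus a fixed-size variant I have not tried yet (imgsz is the Ultralytics flag that sets the export resolution; 640 is just an example matching the default YOLOv8 input):

```bash
# Export with dynamic input shapes (what I used for this second ONNX file)
yolo export model=yolov8n.pt format=onnx dynamic=True

# Possible alternative: export with a fixed input resolution instead, so the
# ONNX input shape matches whatever custom_onnx_input_size is set to
yolo export model=yolov8n.pt format=onnx imgsz=640
```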
Are you aware of any other ways of debugging the ROS wrapper to get closer to the root of the problem?
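For example, would it be meaningful to inspect the raw detections topic directly, along these lines (topic name assumed from the default launch configuration)?

```bash
# Check whether the node publishes anything on the raw detections topic
ros2 topic hz /zed/zed_node/obj_det/objects
ros2 topic echo /zed/zed_node/obj_det/objects --once
```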