Training a custom detector that is compatible with ZED's Unity plugin

I managed to get the custom object detection inside ZED’s Unity Plugin to work. Now I can run the provided yolov7-tiny and yolov4-tiny models inside Unity.

However, I’d like to train my own models and deploy them there. For that, I’d like to know in which format I need to provide the network/weights. I see that the provided weight files have a .weights extension, but I don’t know where they come from.

Do I need to follow, for example, zed-examples/object detection/custom detector/cpp/tensorrt_yolov5_v6.0 at master · stereolabs/zed-examples · GitHub to convert my PyTorch model to a TensorRT engine, or does this work differently?
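In case it helps, my current understanding is that the usual route is PyTorch → ONNX → TensorRT engine. Below is a minimal sketch of the ONNX export step; the checkpoint name, input resolution, and output names are placeholders for my own network, not anything taken from the ZED samples:

```python
# Sketch: export a trained PyTorch detector to ONNX, then build a TensorRT
# engine from it. File names and input size are placeholders.
import torch

# Assumes the checkpoint stores the full nn.Module (not just a state_dict).
model = torch.load("my_detector.pt", map_location="cpu")
model.eval()

dummy_input = torch.zeros(1, 3, 640, 640)  # batch, channels, height, width
torch.onnx.export(
    model,
    dummy_input,
    "my_detector.onnx",
    input_names=["images"],
    output_names=["output"],
    opset_version=12,
)

# Then, on the target machine (TensorRT engines are GPU-specific), something like:
#   trtexec --onnx=my_detector.onnx --saveEngine=my_detector.engine --fp16
```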

Thanks!

Hi,

That’s good to know.

I recommend taking a look at our documentation on the custom detection feature; there is a section about training custom models (https://www.stereolabs.com/docs/object-detection/custom-od/#object-detection-steps).

You can also look at our various samples to see how it is done.
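For orientation, here is a rough sketch of the ingestion flow using the Python API (the Unity plugin follows the same pattern): you run your own detector on the camera image and hand the 2D boxes to the SDK, which fuses them with depth into 3D, tracked objects. Enum and field names may differ slightly between SDK versions (e.g. DETECTION_MODEL vs. OBJECT_DETECTION_MODEL), so please check against your installed version rather than copy-pasting:

```python
import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Tell the SDK that 2D boxes will be ingested from an external detector.
od_params = sl.ObjectDetectionParameters()
od_params.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS  # DETECTION_MODEL in SDK 3.x
zed.enable_object_detection(od_params)

objects = sl.Objects()
while zed.grab() == sl.ERROR_CODE.SUCCESS:
    # Run your own network (e.g. the TensorRT engine) on the left image here,
    # then hand each 2D detection to the SDK. Values below are placeholders.
    box = sl.CustomBoxObjectData()
    box.unique_object_id = sl.generate_unique_id()
    box.label = 0            # your class id
    box.probability = 0.9    # your confidence score
    box.is_grounded = True   # object rests on the floor plane
    box.bounding_box_2d = [[0, 0], [100, 0], [100, 100], [0, 100]]  # 4 image-space corners
    zed.ingest_custom_box_objects([box])

    # The SDK returns the fused 3D, tracked objects.
    zed.retrieve_objects(objects, sl.ObjectDetectionRuntimeParameters())

zed.close()
```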

Best,
Benjamin Vallon

Stereolabs Support

Will do, thank you! I’ll let you know if I run into issues.

The OpenCV for Unity plugin has several examples that can be adapted. However, since OpenCV for Unity does not utilize the GPU, it is not of much use here. I will open a separate thread for that.