Custom detector sample on rotated images

Hi,
I am working on the object detection sample.
I have rotated the camera by 90 degrees (vertical camera), but my model only detects on horizontal images. How should I modify the sample to handle this? I tried on my own, but I can’t figure out how to rotate the bounding boxes.

Furthermore, I noticed that the inference time is way higher than expected. With the small YOLOv8 detection model I get 300 ms of inference time, which is huge. On the Ultralytics page I found a table stating that on a Jetson Orin the small model can reach 18 ms of inference time (NVIDIA Jetson - Ultralytics YOLO Docs).
Any suggestions on how to improve the inference time (other than using a smaller model), and why is this happening?
Thanks in advance,

Hi,

With a quick Stack Overflow search, here is a way to rotate a bounding box by 90 degrees in an image: python - How to rotate a rectangle/bounding box together with an image - Stack Overflow
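As a minimal sketch of the coordinate mapping (assuming a 90° clockwise rotation, axis-aligned boxes in `(x1, y1, x2, y2)` pixel format, and `img_h` being the height of the image before rotation; the function name is just an example):

```python
def rotate_bbox_90_cw(bbox, img_h):
    """Map an axis-aligned box through a 90-degree clockwise image rotation.

    After cv2.ROTATE_90_CLOCKWISE, a pixel (x, y) in an image of height
    img_h lands at (img_h - 1 - y, x), so the box corners are remapped
    and re-sorted back into (x1, y1, x2, y2) order.
    """
    x1, y1, x2, y2 = bbox
    return (img_h - 1 - y2, x1, img_h - 1 - y1, x2)

# Example: a box in a 100 (h) x 200 (w) image, which becomes 200 x 100.
print(rotate_bbox_90_cw((10, 20, 50, 60), img_h=100))  # (39, 10, 79, 50)
```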

To have a fair comparison with the benchmark provided by YOLO, I suggest running their benchmark on your system with the same model to see whether it performs better there.
If you have not already, you can follow the suggestions in the “Best Practices when using NVIDIA Jetson” section of the post you’ve shared, as these maximize performance.
Please also make sure that the image_size parameter matches the benchmark when testing the sample; this can have a big impact on performance.
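When comparing numbers, it also helps to time the inference step in isolation, with a few warm-up iterations, since the first calls and any pre/post-processing can inflate the reported time. A minimal sketch (`run_inference` is a placeholder for your actual model call):

```python
import time

def time_inference(run_inference, n_warmup=5, n_runs=50):
    """Average wall-clock time of run_inference(), skipping warm-up runs."""
    for _ in range(n_warmup):      # warm-up: the first calls are often slower
        run_inference()
    start = time.perf_counter()
    for _ in range(n_runs):
        run_inference()
    return (time.perf_counter() - start) / n_runs

# Usage with a stand-in workload:
avg_s = time_inference(lambda: sum(range(10000)))
print(f"avg inference: {avg_s * 1000:.2f} ms")
```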

Hi,
Thanks for your kind answer.

  1. Rotation and BBOX
    The problem is not about rotating a bounding box; I already did that. The problem is that for some reason I can’t visualize (or rotate) the 3D bounding boxes, and the 2D boxes only appear sometimes, seemingly at random. If I draw (with cv2.rectangle) the 2D bounding boxes that the model predicts on the image, they look correct (which means I am handling the rotation correctly). I’ll leave the full code here (it is simply the sample code modified to handle a rotated image).

detector_2.py (8.5 KB)

  2. Performance
    I use the same image size as the benchmark and set everything up the same way. I think the problem is that the ZED Box is not using the GPU: if I run torch.cuda.is_available() from a Python terminal, I get False.
    In another post you suggest following this guide: Installing PyTorch for Jetson Platform - NVIDIA Docs. But I was wondering, doesn’t PyTorch with CUDA come with the installation of the ZED SDK/JetPack? I’m afraid of messing up my environment. Any suggestion on how to proceed?
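For quickly checking which PyTorch build is active and why CUDA may be unavailable, a small diagnostic along these lines can help (it only reports the situation; for CUDA to show up on a Jetson, torch has to be the NVIDIA-built wheel, not the generic PyPI one):

```python
def cuda_status():
    """Report whether PyTorch is installed and whether it sees a CUDA device."""
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    if torch.cuda.is_available():
        return f"cuda ok: {torch.cuda.get_device_name(0)} (torch {torch.__version__})"
    # A CPU-only wheel (e.g. a plain 'pip install torch' on Jetson) lands here.
    return f"no cuda: torch {torch.__version__} is likely a CPU-only build"

print(cuda_status())
```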

Hi @Prospecto,

  1. I understand better now, sorry. We have seen some issues with using the camera vertically in the OpenGL display, as the common use case is a horizontal camera. We are looking into this so that the bounding boxes are displayed correctly.

  2. PyTorch is not a dependency of the ZED SDK, and therefore does not come bundled with the ZED SDK installer. I don’t think this is the case for JetPack either, which is why we recommended the guide you’ve shared. The section “Installing Multiple PyTorch Versions” describes how to install PyTorch in a virtual environment; this is the best practice in Python for keeping environments controlled and avoiding installing modules system-wide.
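As a sketch, the virtual-environment route looks something like this (the environment path and name are examples; the actual wheel to install comes from the NVIDIA guide linked above, and is left as a placeholder here):

```shell
# Create and activate an isolated environment (example path)
python3 -m venv "$HOME/envs/zed-yolo"
. "$HOME/envs/zed-yolo/bin/activate"

# Then install the Jetson-specific PyTorch wheel from the NVIDIA guide above;
# the generic PyPI 'torch' wheel has no CUDA support on Jetson:
#   pip install <jetson-pytorch-wheel-from-nvidia-docs>

python -c "import sys; print(sys.prefix)"   # confirms the venv interpreter is active
```

This keeps the Jetson system Python untouched, so a broken install can be thrown away by deleting the environment directory.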