Calculating Orientation through the Custom Object Detection API

Hi there!

I’m building a vision pipeline that uses YOLOv8 bounding boxes to obtain the 3D pose of an object. With the ZED SDK’s custom object detection API, this seems pretty straightforward, at least for obtaining the object’s coordinates and velocity.

However, I’m also interested in the orientation of the detected object. What would be the recommended way to achieve this, perhaps given a reference image showing the object in a predefined quaternion orientation?
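To make the idea concrete, here is the kind of relationship I have in mind, as a pure-numpy sketch (the helper names and the (w, x, y, z) convention are mine, not from the ZED SDK): given the predefined quaternion `q_ref` that the reference image corresponds to and an estimated orientation `q_obs` for the current detection, the relative rotation would be `q_obs * conj(q_ref)`.

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    # Conjugate (inverse for a unit quaternion).
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

# q_ref: the predefined orientation shown in the reference image.
# q_obs: the orientation estimated for the current detection.
q_ref = np.array([1.0, 0.0, 0.0, 0.0])                          # identity reference
q_obs = np.array([np.cos(np.pi/8), 0.0, 0.0, np.sin(np.pi/8)])  # 45 deg about z

# Rotation taking the reference pose to the observed pose.
q_rel = quat_mul(q_obs, quat_conj(q_ref))
```

The open question is how to get `q_obs` in the first place, since the custom object detection API only seems to report position, velocity, and a 3D bounding box.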

I’m using a Jetson Orin Nano + ZED 2.