Any way to use Fusion API with Object detection module?

Hello to everyone!

I am working on an immersive room with projection on the floor, a 7 by 7 meter square, with up to 20 visitors at the same time.

I’m using Unreal Engine to generate what is projected on the floor.

I have four ZED 2i cameras hanging from the ceiling, pointed towards the floor, each plugged via USB into an NVIDIA Jetson Nano that sends its data over the network through a router.

Each person who enters the room should have their x/y position on the floor tracked and sent to the computer running Unreal Engine, so I can project content where they are.

For this, my first idea was to use the Fusion API and the multi-camera body tracking sample that you provide.

But I fear that a Live Link solution, which requires full skeleton tracking, will be too heavy a CPU load for me, since I only need each visitor's x/y position on the floor. The same computer already uses a large part of its CPU/GPU resources to run the Unreal Engine content I am working on.

So I was thinking of a similar solution, but using the Object Detection module instead.

Is there a way to use object detection instead of body detection while keeping a Fusion setup?

Because the calibration of the Fusion module (via the ZED360 tool) seems to be really accurate and could be my best friend when the time comes to link everything together.

Hope that someone will be able to give me some advice!

Hi @AntoineBourgouin,

Welcome to the Stereolabs forums :wave:

Your approach appears correct to me; however, the Object Detection module is currently not available in the Fusion module. It is definitely on our roadmap for a future SDK version.

In the meantime, I can suggest trying out our different body tracking models and retrieving only the position of each tracked person. A lower-accuracy model should be enough since you don't require precise keypoint data.
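To make that concrete, here is a minimal sketch of what it could look like, assuming the ZED SDK 4.x Python API (`pyzed.sl`) and a structure mirroring the multi-camera body tracking sample. The calibration file name and port are placeholders, and the exact parameter names should be checked against your SDK version:

```python
# Sketch only: assumes ZED SDK 4.x Python API; file name and port are placeholders.
import pyzed.sl as sl

def run_jetson_sender():
    """Runs on each Jetson: local body tracking with the lightest model, published to Fusion."""
    zed = sl.Camera()
    init = sl.InitParameters()
    init.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
    init.coordinate_units = sl.UNIT.METER
    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        return

    tracking = sl.PositionalTrackingParameters()
    tracking.set_as_static = True                                 # cameras are fixed to the ceiling
    zed.enable_positional_tracking(tracking)

    bt = sl.BodyTrackingParameters()
    bt.detection_model = sl.BODY_TRACKING_MODEL.HUMAN_BODY_FAST   # lowest-cost model
    bt.body_format = sl.BODY_FORMAT.BODY_18
    bt.enable_tracking = True
    zed.enable_body_tracking(bt)

    comm = sl.CommunicationParameters()
    comm.set_for_local_network(30000)                             # placeholder port
    zed.start_publishing(comm)

    bodies = sl.Bodies()
    rt = sl.BodyTrackingRuntimeParameters()
    while True:
        if zed.grab() == sl.ERROR_CODE.SUCCESS:
            zed.retrieve_bodies(bodies, rt)                       # keeps the publisher fed

def run_fusion_receiver():
    """Runs on the Unreal PC: fuses all cameras and reads only each person's floor position."""
    configs = sl.read_fusion_configuration_file(
        "room_calibration.json",                                  # exported from ZED360
        sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP, sl.UNIT.METER)

    fusion = sl.Fusion()
    init_fusion = sl.InitFusionParameters()
    init_fusion.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
    init_fusion.coordinate_units = sl.UNIT.METER
    fusion.init(init_fusion)

    for conf in configs:
        fusion.subscribe(sl.CameraIdentifier(conf.serial_number),
                         conf.communication_parameters, conf.pose)

    bt_fusion = sl.BodyTrackingFusionParameters()
    bt_fusion.enable_tracking = True
    bt_fusion.enable_body_fitting = False                         # no fitted skeleton needed
    fusion.enable_body_tracking(bt_fusion)

    bodies = sl.Bodies()
    rt = sl.BodyTrackingFusionRuntimeParameters()
    while True:
        if fusion.process() == sl.FUSION_ERROR_CODE.SUCCESS:
            fusion.retrieve_bodies(bodies, rt)
            for body in bodies.body_list:
                # With a Y-up coordinate system the floor plane is X/Z;
                # this is the pair you would forward to Unreal Engine.
                x, _, z = body.position
                print(body.id, x, z)
```

Note that the detection model is chosen on the Jetson (sender) side, so picking `HUMAN_BODY_FAST` there keeps the per-camera load low, and the Unreal PC only runs the fusion and the projection.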

Thank you for your feedback.

I will try this way.