Hello,
I’m working with the ZED 2i Stereo Camera and ZED SDK in Unity and using body tracking. My main goal is to attach a camera to the head of the tracked person to get a first-person POV for my crane operator simulation.
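For reference, here is roughly how I attach the camera (a minimal sketch; the `HeadPOVCamera` name and `eyeOffset` value are mine, and the bone lookup assumes a standard Unity humanoid avatar):

```csharp
using UnityEngine;

// Minimal sketch of my POV setup: follow the head bone, sampled in
// LateUpdate so it runs after the ZED plugin animates the rig each frame.
public class HeadPOVCamera : MonoBehaviour
{
    public Animator avatarAnimator; // the avatar driven by ZEDBodyTrackingManager
    public Camera povCamera;
    public Vector3 eyeOffset = new Vector3(0f, 0.08f, 0.10f); // rough eye position in head-bone space

    void LateUpdate()
    {
        // Humanoid rigs expose the head through the Animator, which avoids
        // hard-coding mixamorig bone names.
        Transform head = avatarAnimator.GetBoneTransform(HumanBodyBones.Head);
        if (head == null) return;

        povCamera.transform.SetPositionAndRotation(
            head.TransformPoint(eyeOffset),
            head.rotation);
    }
}
```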
The body tracking generally works, but I’ve noticed a consistent issue:
- When I look upward (e.g., tilt my head back to look at the ceiling or sky), the animated body/head in Unity does not match the full range of motion.
- The head in the animation only tilts slightly upward, nowhere near as dramatically as my real head movement.
I’ve tried:
- Using the provided ZEDManager and ZEDBodyTrackingManager scripts without modification.
- Checking different rigs (e.g., looking at the mixamorig:HeadTop_End bone) to confirm that the animated head is not moving enough.
- Ensuring positional tracking is enabled.
My questions are:
- Is this a known limitation of the body tracking system (reduced head pitch range)?
- Are there parameters in the SDK (e.g., detection model, body fitting, IMU fusion) that can improve head orientation tracking?
- Is there a recommended way to get more accurate head tilt/pitch when looking up or down?
Any advice on improving head tracking fidelity (especially for upward looking) would be greatly appreciated.
Thanks!
Hi,
I think this is caused by the body fitting itself, which prevents “impossible” movements from appearing. It might be a bit too strict in specific situations such as this one, and therefore clamps ranges of motion that are actually realistic.
For now, it is not possible to tune the fitting parameters through the SDK API.
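If you want to experiment on the Unity side in the meantime, a rough workaround (not an official API; `HeadPitchBooster` and the gain value are placeholders, and it assumes your script runs after the plugin in the script execution order) is to scale the head bone's deviation from its rest pose after the fitted pose is applied:

```csharp
using UnityEngine;

// Rough sketch: amplify the head bone's rotation away from its rest pose,
// in LateUpdate so it runs after the ZED plugin writes the fitted pose.
// This scales the whole deviation, not pitch alone; it is mainly a way to
// test whether the fitting clamp is what limits the upward look.
public class HeadPitchBooster : MonoBehaviour
{
    public Animator avatarAnimator;
    [Range(1f, 3f)] public float gain = 1.5f; // placeholder value, tune by eye

    Transform head;
    Quaternion restLocalRotation;

    void Start()
    {
        head = avatarAnimator.GetBoneTransform(HumanBodyBones.Head);
        if (head != null) restLocalRotation = head.localRotation;
    }

    void LateUpdate()
    {
        if (head == null) return;
        // Deviation of the fitted pose from the rest pose, in local space.
        Quaternion delta = Quaternion.Inverse(restLocalRotation) * head.localRotation;
        delta.ToAngleAxis(out float angle, out Vector3 axis);
        if (angle > 180f) angle -= 360f; // use the shortest-arc angle before scaling
        head.localRotation = restLocalRotation * Quaternion.AngleAxis(angle * gain, axis);
    }
}
```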
The fitting algorithm is different for each body model (34 and 38), so you can try the other one.
From my experience, the body 38 fitting allows more freedom, so it might be worth trying.
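In the Unity plugin the body format is normally set from the inspector, but in code the selection looks roughly like this (field and enum names from memory for a recent SDK, so please check them against your version):

```csharp
// Hedged sketch: selecting the body format via the C# API; in Unity this
// is usually done in the ZEDManager / body tracking inspector instead.
// Member names are from memory and may differ across SDK versions.
sl.BodyTrackingParameters btParams = new sl.BodyTrackingParameters();
btParams.BodyFormat = sl.BODY_FORMAT.BODY_34; // compare against BODY_38
btParams.EnableBodyFitting = true;            // fitting is on/off only; its strictness is not exposed
```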
Best,
Thank you for responding! I had been trying body 38, but I tried body 34 this morning and, oddly enough, got a slightly better range of motion and stability. What I really learned, though, is that my eyes move much further than my head does (of course, right!?!). I’ll test with a second monitor propped overhead to increase my FOV and see if that satisfies what is needed for the operator.
A little more about my use cases:
- This camera is being used to replace the Meta Quest 3 VR Headset in our existing product. I want operators to have a head-device-free experience.
- I would also like to use the camera to detect multiple people so I can include signalmen. However, I’m running into the limitation on hand gesture detection. Will this ever be something the camera can capture?
- My team is also developing a firefighter simulator. My goal would be to use this camera to detect firefighters and different pieces of hardware they are holding as they respond to an active scene. Do you think this camera will be able to support such a thing?
At this moment, I have to determine whether I should return the camera before my 30 days are up. I love the ideas behind the camera, but I’m not sure if it’s quite ready to replace our Quest 3 headset.
Yeah, for this specific motion, the body 34 model may behave better. It’s not the first thing I test, so I may have been wrong in my last message.
There is no plan to change the current body tracking model in the short term. We do plan a full rework of this module (which might include hand detection), but that will require a lot of work on our side and is not scheduled for anytime soon, unfortunately.
The body tracking model cannot detect firefighters specifically. However, you may be able to pair it with a custom detector to track the objects they are carrying.
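To sketch what I mean by pairing (C# names from memory, so verify against your SDK version; `MyDetection` stands in for the output of whatever 2D detector you train on your gear, e.g. a fine-tuned YOLO model): you run the detector on the left image, ingest the 2D boxes, and the SDK lifts them to 3D and tracks them alongside the skeletons.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hedged sketch of the custom-detector path (the CUSTOM_BOX_OBJECTS
// detection model): feed 2D boxes from your own detector to the SDK,
// which fuses depth and returns tracked 3D objects. Exact C# member
// names may differ per SDK version.
public struct MyDetection            // placeholder for your detector's output
{
    public int classId;              // e.g. 0 = hose, 1 = axe, from your training set
    public float confidence;
    public Vector2[] corners;        // 4 image-space box corners, in pixels
}

public static class GearIngest
{
    public static void Ingest(sl.ZEDCamera zed, IEnumerable<MyDetection> detections)
    {
        var boxes = new List<sl.CustomBoxObjectData>();
        foreach (var det in detections)
        {
            var box = new sl.CustomBoxObjectData();
            box.uniqueObjectID = System.Guid.NewGuid().ToString(); // your ID scheme here
            box.label = det.classId;
            box.probability = det.confidence;
            box.boundingBox2D = det.corners;
            box.isGrounded = false;  // handheld gear should not be snapped to the floor plane
            boxes.Add(box);
        }
        zed.IngestCustomBoxObjects(boxes);
    }
}
```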
Stereolabs Support