Skeleton gets stuck with a particular gesture (swiping)

Hi,

We have been working on integrating the Stereolabs C# API into our production tool VVVV gamma.

It works quite well so far; I get, for example, point cloud, skeleton, and other tracking data.

Now, in our particular problem domain, we want to detect a swipe gesture, i.e. the hand moving in front of the body.

Surprisingly, I found that the ZED camera does not perform so well with this gesture: the hand joints get stuck in front of the face or the upper body of the person performing it.
This happens with some people more than with others, but overall it seems to be a problem for our use case.

With the C# API I have control over the prediction threshold and the tracking type; those are the only things I can set right now (at least the only things we have exposed in our tool). Playing with these parameters does not improve the situation…
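
For context, this is roughly how those two settings map onto the C# API in our tool (just a sketch; "prediction threshold" is our own naming, which I assume corresponds to detectionConfidenceThreshold):

```csharp
// Sketch of the two parameters we currently expose (ZED SDK 3.x C# API).
// zedCamera is assumed to be an opened sl.Camera with object detection enabled.
// The "tracking type" toggle maps (in our tool) to
// ObjectDetectionParameters.enableObjectTracking at init time.
sl.ObjectDetectionRuntimeParameters odRuntimeParams = new sl.ObjectDetectionRuntimeParameters();
odRuntimeParams.detectionConfidenceThreshold = 50; // our "prediction threshold"

sl.RuntimeParameters rtParams = new sl.RuntimeParameters();
sl.Objects bodies = new sl.Objects();

if (zedCamera.Grab(ref rtParams) == sl.ERROR_CODE.SUCCESS)
{
    // Retrieve the tracked skeletons for the current frame.
    zedCamera.RetrieveObjects(ref bodies, ref odRuntimeParams);
}
```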

Do you have an idea what could be changed on the C# API side to improve the skeleton tracking?

Thank you.

Hi,

I would recommend using the HUMAN_BODY_ACCURATE detection model, which is the most accurate model available in the SDK.
You can also try changing the depth mode to a more accurate one (ULTRA, for example).
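
For reference, here is a minimal sketch of that setup with the C# API, based on the SDK 3.x C# tutorials (please check the exact names against the SDK version you are using):

```csharp
using System.Numerics;
using sl;

Camera zedCamera = new Camera(0);

InitParameters initParams = new InitParameters();
initParams.resolution = RESOLUTION.HD720;
initParams.depthMode = DEPTH_MODE.ULTRA;          // more accurate depth
initParams.coordinateUnits = UNIT.METER;
if (zedCamera.Open(ref initParams) != ERROR_CODE.SUCCESS)
    return;

// Positional tracking must be enabled before body tracking.
Quaternion initialRotation = Quaternion.Identity;
Vector3 initialPosition = Vector3.Zero;
zedCamera.EnablePositionalTracking(ref initialRotation, ref initialPosition);

ObjectDetectionParameters odParams = new ObjectDetectionParameters();
odParams.detectionModel = DETECTION_MODEL.HUMAN_BODY_ACCURATE; // most accurate body model
odParams.enableObjectTracking = true;             // keep consistent skeleton IDs across frames
zedCamera.EnableObjectDetection(ref odParams);
```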

Indeed, when you perform the gesture, your arm points toward the camera, so all the keypoints of the arm are occluded by the hand, which can lead to inaccurate data.
I suspect that is what is happening here.

It would also be very helpful if you could send us an SVO file (Stereolabs video format) in which this issue happens. You can send it to support@stereolabs.com.
We will investigate on our side to potentially improve it in future SDK versions.
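
In case it is useful, recording an SVO can be done directly from the C# API; here is a short sketch (again based on the 3.x tutorials, so the exact signature may differ in your version):

```csharp
// Record roughly 10 seconds of the failing gesture into an SVO file.
ERROR_CODE err = zedCamera.EnableRecording("swipe_issue.svo", SVO_COMPRESSION_MODE.H264_BASED);
if (err == ERROR_CODE.SUCCESS)
{
    RuntimeParameters rtParams = new RuntimeParameters();
    for (int frame = 0; frame < 300; frame++)     // ~10 s at 30 FPS
    {
        // While recording is enabled, each successful Grab() appends a frame.
        if (zedCamera.Grab(ref rtParams) != ERROR_CODE.SUCCESS)
            break;
    }
    zedCamera.DisableRecording();
}
```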

Thanks for your feedback.

Best regards,
Benjamin Vallon

Stereolabs Support