We have been working on an integration of the Stereolabs C# API into our production tool VVVV gamma.
It works quite well so far; I do get, for example, point cloud, skeleton, and other tracking data.
Now in our particular problem domain we want to detect a swipe gesture, i.e. the hand moving in front of the body.
Surprisingly, I found that the ZED camera does not perform well with this gesture. The hand joints get stuck in front of the face or the upper body of the person performing the gesture.
This happens with some people more than with others, but overall it seems to be a problem for our use case.
With the C# API I have control over the prediction threshold and the tracking type; those are the only settings I can change right now (at least the only ones we have exposed in our tool). Playing with these parameters does not improve the situation…
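For context, here is roughly how the two parameters we expose map onto the ZED SDK C# wrapper, as I understand it. The identifier names below are from the 3.x C# API as I recall them, and my mapping of "prediction threshold" and "tracking type" onto SDK fields is a guess; treat this as an illustrative sketch, not our exact code:

```csharp
// Sketch (not our actual code): how our two exposed settings might map
// onto the ZED SDK C# wrapper. Names are from the SDK 3.x C# API as I
// remember them and may differ in other SDK versions.

sl.Camera zed = new sl.Camera(0);

sl.InitParameters initParams = new sl.InitParameters();
initParams.resolution = sl.RESOLUTION.HD720;
initParams.depthMode = sl.DEPTH_MODE.ULTRA;
zed.Open(ref initParams);

// "Tracking type" in our tool would select the body detection model:
sl.ObjectDetectionParameters odParams = new sl.ObjectDetectionParameters();
odParams.detectionModel = sl.DETECTION_MODEL.HUMAN_BODY_ACCURATE; // assumed mapping
odParams.enableObjectTracking = true;
zed.EnableObjectDetection(ref odParams);

// "Prediction threshold" would map to the runtime confidence threshold:
sl.ObjectDetectionRuntimeParameters odRuntime = new sl.ObjectDetectionRuntimeParameters();
odRuntime.detectionConfidenceThreshold = 40; // the value we let users tune

sl.Objects bodies = new sl.Objects();
zed.RetrieveObjects(ref bodies, ref odRuntime);
```

If there are other parameters at this level (body fitting, detection model variants, depth mode) that are known to help with hands occluding the torso, that is exactly the kind of hint I am looking for.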
Do you have an idea what could be changed on the C# API side to improve the skeleton tracking?