Possible bug in keypoint2D [body tracking]

Hello Stereolabs team and community!

I’m developing an application in Unity using the Body Tracking module of the ZED Manager, and I noticed some strange behaviour with the 2D keypoints for head tracking.

For these tests I used the BodyTrackingSingle scene inside the ZED examples folder.
If I run the example as it is, everything works perfectly, but if I go to ZEDBodyTrackingManager->UpdateAvatarControl and switch all the data.keypoint references to data.keypoint2D, the head is not placed correctly, as you can see in this picture.

(In order to make it visible and to match the 3D skeleton, I had to change all the X and Z scales for the bones and joints to 2, and I also had to change the Y scale to -1 on the parent GameObject.)

Am I missing something, or is this a bug in the way the SDK retrieves the 2D data?

Software and Hardware used:

[Unity 2022.3.18f1] 
[Zed2i] 
[Camera firmware 1523, IMU Firmware 777] 
[Zed_Unity_Plugin_v4.0.7] 
[ZED SDK Version "4.0.8"] 
[CUDA Toolkit version: "V12.1.66"]

Thanks a lot

Hey @R2R0, welcome to the forums.

Thanks for the report. I’ll show it to the team, but before that, can you confirm the behavior is the same on the same frame? Here you show two different frames.

Hey @JPlou :slightly_smiling_face:

For the thread I grabbed two frames that were as close as possible, but grabbing the same one is a bit hard in Unity.
But I can confirm that the behaviour remains the same during the whole duration of the video, as well as in other videos I have recorded.

Thanks

Hi @R2R0

I confirmed that the data is projected into 3D space at the end of the processing, so there’s no extra work on the 3D side beyond the projection.

  • How do you project your 2D points into the scene?
  • Can I ask what you want to achieve? Maybe we can work something out if the current output does not fit your app.
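To illustrate what that projection step amounts to conceptually: a 2D detection plus a depth value can be back-projected into 3D with a pinhole camera model. The sketch below is only an illustration with hypothetical intrinsics fx, fy, cx, cy and a per-joint depth z; it is not the SDK’s actual code, and axis/sign conventions may differ.

using UnityEngine;

public static class PinholeSketch
{
    // Pinhole back-projection: pixel coordinates + depth -> 3D camera-space position.
    // fx, fy, cx, cy are hypothetical intrinsics, z a per-joint depth in meters.
    // Illustration only, not the SDK implementation.
    public static Vector3 BackProject(Vector2 pixel, float z, float fx, float fy, float cx, float cy)
    {
        float x = (pixel.x - cx) / fx * z;
        float y = (pixel.y - cy) / fy * z;
        return new Vector3(x, y, z); // camera-space position; axis conventions may differ
    }
}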

Hello @JPlou

In the case I shared, I just took the BodyTrackingSingle scene from the ZED Unity package and changed data.keypoint to data.keypoint2D inside ZEDBodyTrackingManager->UpdateAvatarControl.
Maybe you folks could try to reproduce it to check whether the behaviour is expected?

Am I supposed to do something else to make that work?

What I’m trying to achieve is a skeleton similar to the 3D SDK skeleton, displayed on a 2D Canvas.

Hey @R2R0

Can I ask for the full code of the method you’re changing? I’ll gladly test it on my side.
How do you manage the depth?

Also, have you tried placing the keypoints2D on a canvas, in screen space?

Sure thing @JPlou !

private void UpdateAvatarControl(SkeletonHandler handler, sl.BodyData data)
{
    Vector3[] worldJointsPos = new Vector3[handler.currentKeypointsCount];
    Quaternion[] normalizedLocalJointsRot = new Quaternion[handler.currentKeypointsCount];

    for (int i = 0; i < worldJointsPos.Length; i++)
    {
        // Changed from data.keypoint to data.keypoint2D for this test.
        worldJointsPos[i] = zedManager.GetZedRootTansform().TransformPoint(data.keypoint2D[i]);
        normalizedLocalJointsRot[i] = data.localOrientationPerJoint[i].normalized;
    }
    Quaternion worldGlobalRotation = zedManager.GetZedRootTansform().rotation * data.globalRootOrientation;

    if (data.localOrientationPerJoint.Length > 0 && data.keypoint2D.Length > 0 && data.keypointConfidence.Length > 0)
    {
        handler.SetConfidences(data.keypointConfidence);
        handler.SetControlWithJointPosition(
            worldJointsPos,
            normalizedLocalJointsRot, worldGlobalRotation,
            useAvatar, mirrorMode);
        if (enableFootLocking)
        {
            handler.CheckFootLockAnimator();
        }
    }
}

I just made this change, and I also enabled DisplaySDKSkeleton in ZEDBodyTrackingManager to see how it behaves.

You may need to change float width = 0.0125f; to float width = 2f; in the UpdateSkeleton method of the ZedCustomSkeletonHandler class in order to see it on screen.

You will also see that the skeleton is upside down after these changes.


@R2R0

Sorry for the delay.
I was tracking down the difference, but I think the whole approach is not the right one.
The keypoint2D array contains data in screen coordinates, basically pixel coordinates, relative to the source resolution.

You can totally display them in a canvas, it’s an array of Vector2, so 2D coordinates. You just have to scale them to your canvas size; that’s the only calculation needed.
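As a rough illustration of that scaling (not from the thread), here is a minimal Unity sketch. It assumes a hypothetical canvasRect covering the display area, a hypothetical sourceResolution matching the camera’s output, joints anchored at the canvas’ bottom-left corner, and a Y flip because image coordinates grow downward while UI anchored positions grow upward.

using UnityEngine;

public class Keypoint2DToCanvas : MonoBehaviour
{
    // Hypothetical references: the canvas RectTransform and the camera's output resolution.
    public RectTransform canvasRect;
    public Vector2 sourceResolution = new Vector2(1920, 1080); // e.g. HD1080

    // Maps a keypoint2D value (pixels in the source image) to an anchored position on the canvas.
    public Vector2 ToCanvasPosition(Vector2 keypoint2D)
    {
        Vector2 canvasSize = canvasRect.rect.size;
        float x = keypoint2D.x / sourceResolution.x * canvasSize.x;
        // Flip Y: image coordinates grow downward from the top, UI positions grow upward from the bottom.
        float y = (1f - keypoint2D.y / sourceResolution.y) * canvasSize.y;
        return new Vector2(x, y); // assumes joint RectTransforms anchored at the canvas' bottom-left corner
    }
}

Each joint’s RectTransform.anchoredPosition could then be set to the returned value; the exact mapping depends on how the canvas and anchors are set up.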

In regards to the discrepancy you get in the neck, I think it’s due to perspective. To reproduce it, you should just have to remove the depth coordinate altogether in the keypoint (not keypoint2D) array. Something like:

worldJointsPos[i] = zedManager.GetZedRootTransform().TransformPoint(new Vector3(data.keypoint[i].x, data.keypoint[i].y));

instead of

worldJointsPos[i] = zedManager.GetZedRootTransform().TransformPoint(data.keypoint[i]);
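For what it’s worth, keypoint2D corresponds to a perspective (pinhole) projection of the keypoint onto the image, while dropping z as above behaves like an orthographic projection, which is why the head and neck can land in different places. Below is a minimal sketch of the perspective side, with hypothetical intrinsics fx, fy, cx, cy; it is an illustration only, not the SDK’s actual code.

using UnityEngine;

public static class PerspectiveSketch
{
    // Perspective (pinhole) projection: 3D camera-space point -> pixel coordinates.
    // fx, fy, cx, cy are hypothetical intrinsics; axis/sign conventions may differ from the SDK's.
    public static Vector2 ProjectToPixel(Vector3 point, float fx, float fy, float cx, float cy)
    {
        float u = fx * point.x / point.z + cx; // the division by depth is what creates the perspective effect
        float v = fy * point.y / point.z + cy;
        return new Vector2(u, v);
    }
}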

Sorry it took some time to track down, I misunderstood what you were trying to achieve.

Hey @JPlou!

You can totally display them in a canvas, it’s an array of Vector2, so 2D coordinates

Cool, thank you! That’s what I was doing already :slightly_smiling_face:

In regards to the discrepancy you get in the neck, I think it’s due to perspective.

After some hours scratching my head, I checked once again the scenes I was using for testing, and as you said, the perspective was the culprit.

My mistake here was checking the skeletons in the scene instead of checking how they looked on top of the video. As we can see here, the neck and head positions are the same in 3D and 2D space.

Oh, and I also found that it is easier to pause on the same frame if you untick Real-Time Mode inside the ZedManager and step frame by frame in Unity, in case someone finds that information helpful!

My apologies for the confusion and thanks a lot for the help and the quick responses :man_bowing:
