User Perspective Magic Lens Display

Hello! I ran into a weird tracking problem using the Body Tracking module of the ZED SDK. First of all, the idea: track a person's face in front of a monitor and use that tracking data to control a camera in Unity. The camera uses an asymmetric-frustum script to render the perspective-correct view of the person looking "through" the display.

The main problem is precision. Sometimes the skeleton renders higher or lower than it should, so I get a random offset on the skeleton. Sometimes the perspective matches perfectly, and sometimes I get a big offset of around 10 cm in height. The camera detects itself at the same position and rotation every time, so the self-positioning is not the problem, but the "projected" SDK skeleton sometimes has its feet under the ground. For me there are two possible causes: first, a weird problem in the SDK, or second, my way of getting the skeleton data is wrong. I'm using this code to access the skeleton data:

using sl;
using System.Collections.Generic;
using UnityEngine;

public class eyeBasedCameraTracking : MonoBehaviour
{
    public ZEDManager zedManager;
    public Camera mainCamera;
    public GameObject Zed2i;
    //public Camera frustrumCamera;
    public GameObject tester;
    public float threshold;
    public float smoothFactor = 0.1f; // smoothing factor, adjustable in the Inspector
    public float positionThreshold = 0.002f; // movement threshold (smaller movements are ignored)
    private Vector3 smoothedPosition; // buffer for the smoothed position

    void Start()
    {
        zedManager.OnBodyTracking += UpdateHeadPosition;
    }

    void OnDestroy() // Unity calls OnDestroy; a method named Destroy is never invoked by the engine
    {
        zedManager.OnBodyTracking -= UpdateHeadPosition;
    }

    void UpdateHeadPosition(BodyTrackingFrame dframe)
    {
        List<DetectedBody> newbodies = dframe.GetFilteredObjectList(true, false, false);

        foreach (DetectedBody dbody in newbodies)
        {
            UpdateAvatarControl(dbody.rawBodyData);
        }
    }

    private void UpdateAvatarControl(BodyData data)
    {
        Vector3[] worldJointsPos = new Vector3[38]; // keypoint count of the BODY_38 format
        Vector3 zedPosition = Zed2i.transform.position;

        for (int i = 0; i < worldJointsPos.Length; i++)
        {
            worldJointsPos[i] = zedManager.GetZedRootTransform().TransformPoint(data.keypoint[i]);
        }

        Vector3 rawHeadPosition = worldJointsPos[5] + new Vector3(-0.08f, 0.045f, 0.03f); // offset correction: roughly between the eyes; nudged slightly to the right to compensate for the offset
        float distance = Vector3.Distance(smoothedPosition, rawHeadPosition);

        if (distance > positionThreshold)
        {
            if (smoothFactor == 0)
            {
                smoothedPosition = rawHeadPosition; // apply directly, but only when the movement exceeds the threshold
            }
            else
            {
                smoothedPosition = Vector3.Lerp(smoothedPosition, rawHeadPosition, smoothFactor);
            }
        }

        tester.transform.position = smoothedPosition;
        mainCamera.transform.localPosition = smoothedPosition;
        Debug.Log($"Threshold: {positionThreshold}, movement: {distance}, head position: {smoothedPosition}");
    }
}

I know there should be an easier way, but I'm not able to program that myself using the standard body tracking showcase, for example. My question is whether this offset is normal; maybe the ZED 2i simply cannot measure the head position that precisely. The interesting thing is that this offset problem only occurs after a restart of the program in Unity: sometimes the offset is clearly visible, then I restart and everything works perfectly fine. Also, when I use non-static positional tracking, the camera detects itself half a meter lower than it really is.
Maybe someone can help me with my confusion. Have a nice day!

Update: I got it. Still, one issue remains. If I read the code correctly, I only get the raw body data. How do I access the transformed body data, with transforms like auto offset or foot lock applied?

Those are applied directly to the 3D avatar by the ZEDSkeletonAnimator script.

Hi BenjaminV, thanks for your answer.
How would I get the 3D world position of the skeleton joints from this avatar? Say I want a specific joint (the eye position, for example). Keep in mind I need the transformed skeleton data, with auto offset etc. from the body tracking manager applied.

You can get the joint transforms with this method: https://docs.unity3d.com/ScriptReference/Animator.GetBoneTransform.html
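As a minimal sketch of that approach (assuming the avatar GameObject carries a humanoid `Animator`; the field name and bone choice here are illustrative):

```csharp
using UnityEngine;

// Sketch: read a humanoid bone's world position from the animated avatar.
public class BoneReader : MonoBehaviour
{
    public Animator avatarAnimator; // assign the avatar's Animator in the Inspector

    void Update()
    {
        // GetBoneTransform returns null if the rig has no mapping for this bone.
        Transform head = avatarAnimator.GetBoneTransform(HumanBodyBones.Head);
        if (head != null)
        {
            Debug.Log($"Head world position: {head.position}");
        }
    }
}
```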

Best,

Stereolabs Support

I was able to get it to work by using tags in Unity; I found no other workaround for this. The problem is that I get a null error when using the LeftEye bone:

using sl;
using UnityEngine;

public class HeadTracker : MonoBehaviour
{
    public Camera mainCamera;
    Transform eyePosition;
    Animator animator;

    // Update is called once per frame
    void Update()
    {
        // The avatar is spawned at runtime, so it is looked up by tag each frame.
        var go = GameObject.FindGameObjectWithTag("Unity_Avatar");
        if (go == null) return; // no avatar in the scene yet

        animator = go.GetComponent<Animator>();
        // Returns null if the humanoid rig has no LeftEye mapping.
        eyePosition = animator.GetBoneTransform(HumanBodyBones.LeftEye);
        Debug.Log(eyePosition?.position);

        if (eyePosition != null)
        {
            mainCamera.transform.position = eyePosition.position;
        }
    }
}

You said I need to use the joints, so I guess there is no LeftEye bone here. But what am I missing? Where in the code exactly are those joints? They are listed in the skeleton handler script, but like I said: where do I need to call them? Which syntax do I need to use? And is the auto offset already applied to these joints? Greetings, Moritz

You are using HumanBodyBones.LastBone, which is not a bone.

Stereolabs Support

I corrected this typo one minute after I posted. I'm using LeftEye. Greetings, Moritz

You are getting the transform of each joint of the rig used to animate your avatar.
I think that if the joint is not part of the rig, you cannot get its transform like that.

Yes, but the left eye is definitely a joint; it's just not a bone. So my question is still the same: how can I get the position of the left eye on the skeleton with auto offset and the other body tracking manager settings applied?

With this method, you are getting the actual position of the joint of the avatar in the scene, so every transform, offset, or modification to the raw data is already applied.
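A hedged sketch of how that can be combined with a fallback for rigs that have no LeftEye mapping (the Head fallback and field names are assumptions for illustration, not part of the SDK):

```csharp
using UnityEngine;

// Sketch: take the LeftEye bone if the humanoid rig maps it, otherwise fall
// back to the Head bone. Because GetBoneTransform reads the posed avatar in
// the scene, offsets applied by the body tracking manager are already baked in.
public class EyeOrHeadTracker : MonoBehaviour
{
    public Camera mainCamera;
    public Animator avatarAnimator; // the ZED-animated avatar's Animator

    void LateUpdate() // runs after animation has been applied this frame
    {
        Transform eye = avatarAnimator.GetBoneTransform(HumanBodyBones.LeftEye);
        Transform target = eye != null
            ? eye
            : avatarAnimator.GetBoneTransform(HumanBodyBones.Head);

        if (target != null)
        {
            mainCamera.transform.position = target.position;
        }
    }
}
```

Reading the pose in `LateUpdate` rather than `Update` helps ensure the Animator has already written the current frame's pose before the camera is moved.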

Stereolabs Support