Smart Mirror in Unity

I’m working on a prototype “smart mirror” setup corresponding to the Unity screenshot shown below. A ZED 2i is placed above a small box that has a one-way security mirror set into a frame (the mirror is not shown here so we can see inside the box). The ZED 2i faces forward and down to track the person looking into the mirror frame. Inside the box is a monitor set roughly as far behind the mirror as the viewer is in front of it.

On this monitor I’m going to render content in such a way that it appears co-located with the viewer’s mirror image.

The overall approach is:

  1. Set up a Unity project as outlined in Body Tracking - Stereolabs.
  2. Create an additional virtual camera for the monitor content.
  3. When a tracked body appears, determine its left-eye position in world-space coordinates.
  4. Position the virtual camera on the opposite side of the mirror plane at this eye position, looking back towards the viewer (see the reflection sketch after this list).
  5. Adjust the virtual camera’s frustum so that it intersects a mirrored version of the monitor. Note that this is an asymmetric view frustum; in Unity it can be achieved by adjusting the lens-shift parameters of a physical camera.
  6. Repeat steps 4-5 as the tracked body’s eye-point changes.
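For step 4, here’s a minimal sketch of the reflection itself. mirrorPoint and mirrorNormal are my own names for a world-space point on the mirror plane and its unit normal; they aren’t part of the ZED SDK, you’d set them up to match your physical rig.

// Reflects a world-space point across the mirror plane, given a point
// on the plane and the plane's unit-length normal.
Vector3 ReflectAcrossPlane(Vector3 point, Vector3 mirrorPoint, Vector3 mirrorNormal)
{
    float signedDistance = Vector3.Dot(point - mirrorPoint, mirrorNormal);
    return point - 2f * signedDistance * mirrorNormal;
}

The virtual camera then sits at ReflectAcrossPlane(eyeWorldPosition, mirrorPoint, mirrorNormal), with its forward direction being the viewer’s gaze direction reflected the same way (Vector3.Reflect works for the direction vector).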

At the moment I’m stuck on step 3. Here’s a code snippet from a script attached to the virtual camera. It runs once per frame.

// Grab the latest body-tracking frame from the ZEDManager.
BodyTrackingFrame bodyTrackingFrame = zedManager.GetBodyTrackingFrame;

// Take the first detected body and its raw skeleton data.
DetectedBody firstDetectedBody = bodyTrackingFrame.detectedBodies[0];
BodyData bodyData = firstDetectedBody.rawBodyData;

// Left-eye keypoint from the BODY_38 skeleton format.
Vector3 eyePosition = bodyData.keypoint[(int)BODY_38_PARTS.LEFT_EYE];
Debug.Log(eyePosition);
transform.position = eyePosition;

Ignore for the moment that there is no error/null checking going on here. How do I transform a keypoint from its local coordinate system to the world coordinate system?

Any help is much appreciated.

Here’s a Unity mockup explaining the virtual camera approach described above. The viewer in front of the mirror frame sees a virtual image of themselves behind the mirror. We position our virtual camera at the eye of this virtual person, facing back towards the mirror. Next we mirror the monitor in the cabinet about the plane of the mirror; this is the green quad in front of the mirror frame. Finally, we adjust the frustum of the virtual camera so it just touches the borders of the green quad. If the resulting virtual camera view is displayed on the monitor, I believe the yellow stick-figure viewer will perceive it as a depth-correct overlay.
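In case it helps anyone, here’s one way I’d sketch the frustum fit without lens shift: build the off-axis projection matrix directly from the green quad’s corners. This assumes the virtual camera’s forward axis is perpendicular to the quad (which holds if the camera looks along the mirror normal and the monitor is parallel to the mirror); bottomLeft, bottomRight and topLeft are my own names for the quad’s world-space corners.

// Builds an off-axis (asymmetric) frustum so the camera's view exactly
// covers a rectangular quad perpendicular to its forward axis.
void FitFrustumToQuad(Camera cam, Vector3 bottomLeft, Vector3 bottomRight, Vector3 topLeft)
{
    // Quad corners in the camera's local space.
    Vector3 bl = cam.transform.InverseTransformPoint(bottomLeft);
    Vector3 br = cam.transform.InverseTransformPoint(bottomRight);
    Vector3 tl = cam.transform.InverseTransformPoint(topLeft);

    float n = cam.nearClipPlane;
    float scale = n / bl.z; // project the quad edges onto the near plane

    cam.projectionMatrix = Matrix4x4.Frustum(
        bl.x * scale,  // left
        br.x * scale,  // right
        bl.y * scale,  // bottom
        tl.y * scale,  // top
        n, cam.farClipPlane);
}

Setting lensShift on a physical camera should give the same result; overriding projectionMatrix just makes the asymmetry explicit.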

Hi,

The keypoint data are expressed in the ZED camera’s reference frame.
To get the world position of the left eye, you need to do the following:

transform.position = zedManager.GetZedRootTansform().TransformPoint(eyePosition);

Here, zedManager is a reference to the ZEDManager instance that handles your ZED camera, present on the ZED_Rig_Mono prefab.


Thanks Benjamin, that worked nicely. (You’re probably already aware that the method name has a typo, “Tansform” instead of “Transform”.)
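For anyone finding this later, my per-frame snippet now looks like this, with the fix applied and the basic guards I’d previously omitted (a sketch; adjust the checks if detectedBodies is an array rather than a list):

BodyTrackingFrame bodyTrackingFrame = zedManager.GetBodyTrackingFrame;
if (bodyTrackingFrame == null || bodyTrackingFrame.detectedBodies.Count == 0)
    return;

BodyData bodyData = bodyTrackingFrame.detectedBodies[0].rawBodyData;

// Keypoints are in the ZED camera's reference frame; convert to world space.
Vector3 localEyePosition = bodyData.keypoint[(int)BODY_38_PARTS.LEFT_EYE];
transform.position = zedManager.GetZedRootTansform().TransformPoint(localEyePosition);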

Oh wait, you are right; we’ll fix it in the next version.

Thanks for the report.

I thought I’d post a quick screenshot from the demo I’ve developed. It implements the process described above, and it works.

On every frame, it uses the body-tracking data to get the viewer’s eye-point in real-world space, then renders content for display on the monitor behind the partially-silvered mirror. This content overlaps the viewer’s real mirror image and feels (roughly) depth-correct.
