Kind of a vague, open-ended question, but we’re working on an interactive mirror exhibit. The problem we’re running into is that the LED wall is 9 feet tall and we can’t place the camera in front of the wall at eye/chest height. We’d either need to put it at the top or bottom and angle it awkwardly, or remove an LED panel in the center for the camera and leave a gaping hole in the wall.
Has anyone else done something similar and found any clever workarounds? Is there anything within the ZED SDK we could use to manipulate the image, maybe even with a second camera?
Thanks!
Hi,
Do you have a rough idea of how far the person will be standing from the screen? It would give us an idea of the camera angle needed.
Stereolabs Support
It’ll be on a wall in a hallway that’s 27’ wide. Ideally, I think we’d just pick up the 13’ or so closest to the wall, if possible. Another complicating factor is that we’re trying to remove the background behind everyone. The hallway will be empty, which helps, but extreme angles bring the floor/ceiling into the equation for depth-based removal. The ceiling is just under 14’ up.
I would be less concerned about the angles if there were a more reliable way to segment out people beyond depth, but the segmented body masks from the ZED SDK occasionally lose limbs, feet, or hands. Are there common background removal techniques people have used in conjunction with depth-based removal (we’re working in Unity)?
Thanks!
Oh, I initially thought you wanted to do pose estimation using our body tracking module, which is indeed sensitive to the camera’s point of view.
If you plan to only do background subtraction, it should not be a problem to have the camera on top of the screen, looking downward.
Knowing the camera’s rotation, you can reproject the depth to align it with the floor and remove the background behind the person.
You can also use the body tracking feature to detect the person, get their distance from the screen, and decide how far away the background cutoff needs to be.
If the camera is static, you can set the angle manually (if you have a way to measure it accurately) or use the IMU data provided by the SDK. The IMU gives you the orientation of the camera using the gravity vector as a reference.
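For reference, reading that orientation looks roughly like this with the Python API (an untested sketch; the same sensor data is exposed by the other language wrappers). Opening the camera with Unity’s Y-up / Z-forward coordinate convention keeps the axes intuitive:

```python
import math
import pyzed.sl as sl

# Open the camera in meters, using Unity's convention (Y up, Z forward).
zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
init.coordinate_system = sl.COORDINATE_SYSTEM.LEFT_HANDED_Y_UP
init.depth_mode = sl.DEPTH_MODE.ULTRA
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# The IMU orientation is referenced to the gravity vector, so it gives
# the camera's tilt without any hand measurement.
sensors = sl.SensorsData()
if zed.get_sensors_data(sensors, sl.TIME_REFERENCE.CURRENT) == sl.ERROR_CODE.SUCCESS:
    imu_pose = sl.Transform()
    sensors.get_imu_data().get_pose(imu_pose)
    ox, oy, oz, ow = imu_pose.get_orientation().get()
    # Rotation around the camera's X axis = the downward tilt (pitch).
    pitch = math.atan2(2 * (ow * ox + oy * oz), 1 - 2 * (ox * ox + oy * oy))
    print("Camera tilt: %.1f deg" % math.degrees(pitch))
```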
This is helpful, thanks! Is there any way I can just mask out the floor and remove it from the image? I didn’t see anything in the SDK about that beyond some plane detection, and I’m not sure how I would use that.
For the background subtraction, you won’t reproject the depth value directly but rather the point cloud, which gives the 3D position of each pixel in world space.
The depth is then the Z component of this 3D vector (assuming Z is forward).
To filter out the floor, compare the Y component (up axis) of each 3D point against a threshold: if a point’s Y is below the threshold, it belongs to the floor and can be discarded; otherwise, keep it.
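Here is a minimal sketch of that filter with the Python API; the tilt, mounting height, and distance cutoff are placeholder values for illustration, to be replaced with your measured setup (or the IMU reading):

```python
import numpy as np
import pyzed.sl as sl

# Open as before: meters, Unity-style Y-up / Z-forward axes.
zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
init.coordinate_system = sl.COORDINATE_SYSTEM.LEFT_HANDED_Y_UP
init.depth_mode = sl.DEPTH_MODE.ULTRA
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Placeholder values -- replace with your real setup.
tilt_down = np.radians(20.0)  # camera's downward tilt (e.g. from the IMU)
cam_height = 2.9              # camera height above the floor, meters
floor_margin = 0.05           # drop points within 5 cm of the floor
max_depth = 4.0               # drop everything beyond ~13' from the wall

runtime = sl.RuntimeParameters()
cloud = sl.Mat()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    # Per-pixel 3D positions in camera space (H x W x 4, float32).
    zed.retrieve_measure(cloud, sl.MEASURE.XYZ)
    xyz = cloud.get_data()[:, :, :3]

    # Rotate camera space around X by the tilt so Y becomes world-up,
    # then offset by the mounting height so Y = 0 is the floor.
    c, s = np.cos(tilt_down), np.sin(tilt_down)
    y_w = c * xyz[:, :, 1] - s * xyz[:, :, 2] + cam_height
    z_w = s * xyz[:, :, 1] + c * xyz[:, :, 2]

    # Keep pixels above the floor and within range of the screen;
    # invalid depth (NaN/inf) is rejected by the isfinite test.
    keep = np.isfinite(z_w) & (y_w > floor_margin) & (z_w < max_depth)
```

The max_depth cutoff could also come from the body tracking detection (the detected person’s Z plus a margin) instead of a fixed value, and in Unity the same per-pixel test would typically live in a shader.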
Stereolabs Support
Oh, gotcha, I misunderstood at first. I’ll give this a try!
Would you anticipate us running into issues with reflective floors? The installation will take place in a hallway with tiles that seem really polished, so there may be some reflections. Unfortunately we won’t be able to test things until we get there and set up.
Yes, it’s very possible that the depth accuracy will be degraded if there is too much reflection from the floor.
Unfortunately, that’s a known limitation of stereo cameras for depth sensing: even with really strong depth algorithms, they still struggle in these cases.
Stereolabs Support