Hey everyone,
I’m currently working on a setup with multiple ZED 2i cameras sending data to Unreal via Fusion Live Link.
A user would be able to “log in” to the application via an RFID tag and then get their personalized avatar in the 3D scene. To do that, though, I need to be able to determine which person in the scene is which user.
My idea (or hope) was that I could grab the distances between different joints (shoulder to shoulder, neck to pelvis, etc.) and save those along with the user’s avatar. Then I could load that data and check it against the people in the scene to make sure each avatar ends up on the right person.
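For reference, here is roughly what I mean, as a simplified Python/NumPy sketch. The keypoint indices are my assumption based on the BODY_34 format and should be checked against the SDK’s enum for your version:

```python
import numpy as np

# Assumed keypoint indices for the ZED BODY_34 format -- verify against
# the sl.BODY_34_PARTS enum in your SDK version before relying on them.
PELVIS, NECK = 0, 3
LEFT_SHOULDER, RIGHT_SHOULDER = 5, 12

# Joint pairs whose distances form a per-person "signature".
BONE_PAIRS = [
    (LEFT_SHOULDER, RIGHT_SHOULDER),  # shoulder width
    (NECK, PELVIS),                   # torso length
]

def signature(keypoints: np.ndarray) -> np.ndarray:
    """keypoints: (N, 3) array of 3D joint positions in meters."""
    return np.array([np.linalg.norm(keypoints[a] - keypoints[b])
                     for a, b in BONE_PAIRS])

def best_match(sig: np.ndarray, saved: dict[str, np.ndarray]) -> str:
    """Return the user ID whose stored signature is closest (Euclidean)."""
    return min(saved, key=lambda uid: np.linalg.norm(saved[uid] - sig))
```

In practice `best_match` would also need a distance threshold so that unknown people aren’t forced onto the nearest stored signature.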
However, I found that the distances vary depending on where in the setup a person enters the cameras’ view. Sometimes the shoulders are 35 cm apart; other times (for the same person, of course) they are 30 or 40 cm apart.
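One mitigation I’m considering is aggregating the signature over a sliding window of frames and using the median rather than a single-frame value. A rough sketch (the window size and per-body bookkeeping are placeholders):

```python
from collections import defaultdict, deque
import numpy as np

WINDOW = 60  # frames (~2 s at 30 FPS); tune for your setup

# One rolling buffer of signatures per tracked body ID.
history: dict[int, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def stable_signature(body_id: int, sig: np.ndarray):
    """Median signature over the last WINDOW frames; None until full."""
    buf = history[body_id]
    buf.append(sig)
    if len(buf) < WINDOW:
        return None
    return np.median(np.stack(buf), axis=0)
```

That should damp per-frame noise, but it presumably won’t help if the bias depends on where the person is standing relative to the cameras.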
Are there ways to make these numbers more consistent? Or does anyone know of other ways to determine who is who in the scene? Face recognition would probably be ideal, I suppose, but I don’t have that available.
Thanks!