Questions Regarding ZED Skeleton Tracking

Hi all! Our team is trying to use the ZED2i for skeleton tracking of a single participant in a 15m x 5m area. We attempted to use a single ZED and ZED360 fusion, but neither was robust enough for our needs. The attached video presents our current setups and the issues we’ve encountered. We are trying to determine whether our setup has major flaws or if there is a hardware bottleneck that we are unaware of. Thank you!

Hardware: i9-13900K + RTX 4090
Pipeline: ZED fusion streamed to Unity (multi-cam) or ZED Unity SDK (single cam)

Hi @MarioYang!

I’m reviewing the video right now; the project looks even cooler in it.

That being said, I’m taking note of the issues you mention, but before I can move forward I have some questions, the same as in your last post:

  • What version of the ZED SDK are you using?
  • Can you send a ZED Diagnostic result file? It’ll give us more info on your exact setup and the state of the SDK.

1. In ZED360, what can we do if the camera previews don’t reflect real-world position and rotation, even after multiple calibration attempts?


For the calibration issues:

  • Could you send us SVOs replicating your calibration process? You can record them using our multi-camera sample (here are instructions to build it on Windows), or with several instances of ZED Explorer. The SVOs should start at about the same time, show only one person, and run for 30 seconds to 1 minute. (There’s a minimal recording sketch after this list.)
  • How much discrepancy are you observing? With ~1 minute of calibration, you should have a pretty close setup.
  • How many fps do you have in ZED360? You can enable the metrics in the bottom-right of the window. If 15 fps is not reached, the calibration quality can be impacted.
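
If it helps, a minimal recording loop with the Python API looks roughly like this. It’s a simplified sketch in the spirit of our SDK 4.x samples, not a drop-in replacement for the multi-camera sample; run one instance per camera, and adjust names to your SDK version:

    # Minimal per-camera SVO recording loop (pyzed, SDK 4.x style).
    import pyzed.sl as sl

    zed = sl.Camera()
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD720
    init_params.camera_fps = 30

    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        exit("Failed to open the camera")

    # H.264 compression keeps a 1-minute SVO at a manageable size.
    recording_params = sl.RecordingParameters("calib_cam.svo", sl.SVO_COMPRESSION_MODE.H264)
    if zed.enable_recording(recording_params) != sl.ERROR_CODE.SUCCESS:
        exit("Failed to start recording")

    # ~60 seconds at 30 fps; each successful grab appends a frame to the SVO.
    for _ in range(30 * 60):
        zed.grab()

    zed.disable_recording()
    zed.close()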

I can’t think of any obvious bottlenecks or issues in your setup, except maybe the windows. Light and reflections might be tricky for the cameras to handle in some situations.

  • Can you please also provide an SVO from the camera that sees the windows the most, recorded in the daytime? I’d like to check the depth quality in that situation. (The sketch below shows a quick way to inspect depth from an SVO.)
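
For reference, here’s roughly how you could inspect the depth from an SVO yourself; this is a sketch along the lines of the SDK 4.x Python API, and the file name is a placeholder:

    # Quick depth-quality check on a recorded SVO.
    import pyzed.sl as sl

    init_params = sl.InitParameters()
    init_params.set_from_svo_file("window_cam.svo")  # placeholder file name
    init_params.depth_mode = sl.DEPTH_MODE.NEURAL    # or ULTRA, depending on your pipeline
    init_params.coordinate_units = sl.UNIT.METER

    zed = sl.Camera()
    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        exit("Failed to open the SVO")

    depth = sl.Mat()
    confidence = sl.Mat()

    while zed.grab() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
        zed.retrieve_measure(confidence, sl.MEASURE.CONFIDENCE)  # confidence map for the same frame
        # Sample the depth at the image center as a rough sanity check.
        err, center_depth = depth.get_value(depth.get_width() // 2, depth.get_height() // 2)
        print("center depth (m):", center_depth)

    zed.close()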

I know I’m asking for a lot, but I’m hopeful we’ll converge to a solution with these elements.

Hey @JPlou,

This is David (from the video); I am working with Mario on this project.

First, to answer some of your questions:

  • What version of the ZED SDK are you using?

We are using 4.0.8 (see the attached diagnostics report).

  • How many fps do you have in ZED360? You can enable the metrics in the bottom-right of the window. If 15 fps is not reached, the calibration quality can be impacted.

We consistently run at 29–30 fps.

  • How much discrepancy are you observing? With ~1 minute of calibration, you should have a pretty close setup.

It’s hard to say. After about one hour, the tracked skeleton starts to jump between the different per-camera skeletons, which is maybe a 15–20 cm offset.

I can’t think of any obvious bottlenecks or issues in your setup, except maybe the windows. Light and reflections might be tricky for the cameras to handle in some situations.

This is indeed a downside of our location, as the lighting changes significantly over time; the best results seem to come with a light overcast. Could this be the issue we are seeing? Should we get flood lights and block out the sun better?

Second, here are the recordings and files:
I am not allowed to post more than two links, so here is one Google Drive folder with all the files.

  • A folder with a one-minute recording from all 4 cameras.
  • A screen recording of how ZED360 looks when the calibration does not improve.
  • The corresponding calibration file.

Hi @MarioYang, @daGoedie.

Thanks a lot for the data. We’ve reproduced the calibration-not-converging issue thanks to your SVOs and calibration file, and we are investigating it. It looks exactly like the video you sent along.

I’ll keep you updated, thanks again for the report.


Hello @MarioYang, @daGoedie,

The calibration-not-converging issue has been identified and fixed on our side. We’re running QA tests, and the fix should be integrated in the next release, which will be out very shortly (I’m talking days, not weeks).

We’re also actively working on a continuous calibration solution, so that multiple calibrations are not needed if the setup does not move. This is an important feature, but also a new one that will need some time to develop and validate. We know it’s important for you and for all projects that use multi-camera setups over long periods, so thanks again for highlighting this.


Hey @JPlou

Thank you for testing and improving. Looking forward to testing the new version.

Would the issue you noticed and have now fixed also have an impact on overall tracking performance? That is, the tracking jumping between different tracked skeletons.

I also wanted to ask about the camera orientation setup. We need a long tracking area, 20 x 5 meters; currently we have the cameras in the staggered setup you can see in the videos I sent. Would you have additional advice for us: other camera poses to use, blacking out the windows, using additional flood lights, or switching to the distributed compute approach? (We did order one of the ZED Orin boxes and are deploying the other cameras with NVIDIA Jetsons.)

Just for context, it is being integrated into a local-multiplayer VR Unity “game”.

Thanks again for your help and for looking into our issue.

Best
David

P.S. If you have a scheduled release date please let me know!

Hi @daGoedie,

Would the issue you noticed and have now fixed also have an impact on overall tracking performance? That is, the tracking jumping between different tracked skeletons.

If you’re talking about the skeleton of one person stuttering between what seem to be the skeletons from two cameras offset by 10–20 cm, it should improve this, because there will be less discrepancy between the skeletons of the same subject from several cameras.

Or are you talking about “ID-jumping”, like when two people are really close together and the Fusion can mix them up in some cases? This is mostly caused by occlusion and should not be too common.

I also wanted to ask about the camera orientation setup. We need a long tracking area, 20 x 5 meters; currently we have the cameras in the staggered setup you can see in the videos I sent. Would you have additional advice for us: other camera poses to use, blacking out the windows, using additional flood lights, or switching to the distributed compute approach? (We did order one of the ZED Orin boxes and are deploying the other cameras with NVIDIA Jetsons.)

The optimal range for the body tracking of each camera is about 8 m, so for a 5 x 20 m area you will most probably need at least 2 more cameras (see the back-of-the-envelope sketch below).
I would do a setup like this, but there could be other solutions, for example if you’re able to have the cameras not directly on the edges of the area.
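
As a purely illustrative back-of-the-envelope calculation (my own numbers for a staggered layout, not an official sizing rule):

    import math

    area_length_m = 20.0
    per_camera_range_m = 8.0  # optimal body-tracking range per camera (see above)

    # Staggered layout: cameras alternate between the two long sides,
    # each covering roughly one range-length of the area.
    cameras_per_side = math.ceil(area_length_m / per_camera_range_m)  # -> 3
    total_cameras = 2 * cameras_per_side                              # -> 6
    print(total_cameras)  # 6, i.e. at least 2 more than the current 4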

We’re working on a tool that will help design a setup with specific cameras; we’ll get it out there as soon as possible. Like, really soon, because we understand that designing these spaces is painful at the moment.

Blacking out the windows or having more controlled lighting would help if the depth is bad on the cameras that see the windows.
Using the distributed workflow will probably be your only way of adding more cameras, so I indeed advise looking into it (a sender-side sketch follows below).
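
For the distributed workflow, the sender side on each Jetson would look roughly like this. It’s a sketch in the spirit of the SDK 4.x Python fusion samples: body tracking runs locally and only skeleton data is published to the Fusion host. The port is an assumption; match it to your fusion configuration:

    # Sender sketch for the distributed Fusion workflow (pyzed, SDK 4.x style).
    import pyzed.sl as sl

    zed = sl.Camera()
    init_params = sl.InitParameters()
    init_params.coordinate_units = sl.UNIT.METER

    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        exit("Failed to open the camera")

    # Body tracking runs on the Jetson itself; only skeleton data goes over the wire.
    zed.enable_positional_tracking(sl.PositionalTrackingParameters())
    body_params = sl.BodyTrackingParameters()
    body_params.enable_tracking = True
    zed.enable_body_tracking(body_params)

    # Publish on the local network; the port is an assumption, match your fusion config.
    comm_params = sl.CommunicationParameters()
    comm_params.set_for_local_network(30000)
    zed.start_publishing(comm_params)

    bodies = sl.Bodies()
    while True:
        if zed.grab() == sl.ERROR_CODE.SUCCESS:
            zed.retrieve_bodies(bodies)  # retrieving also feeds the publisher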

Can I ask how many people should be tracked in this area at once? Depending on this, you may want to add more cameras to reduce occlusions, or the opposite, use fewer cameras if only one person is to be tracked (not many fewer, because your area is huge, but still).

I don’t have a release estimate more precise than “in the coming days”, sorry!

Hey @JPlou

Thanks for your quick and insightful response.

If you’re talking about the skeleton of one person stuttering between what seem to be the skeletons from two cameras offset by 10–20 cm, it should improve this, because there will be less discrepancy between the skeletons of the same subject from several cameras.

Yes, this is what I mean, and it’s good to hear that it will improve.


Can I ask how many people should be tracked in this area at once? Depending on this, you may want to add more cameras to reduce occlusions, or the opposite, use fewer cameras if only one person is to be tracked (not many fewer, because your area is huge, but still).

We only need to track one person for this setup.

Thank you for the other advice; I’ll look out for the next release version.


Edit:
Moving on to running on the edge: I am getting this error when trying to start “Edge Body Tracking” through ZED Hub (which, by the way, is an awesome tool).
Log:

[Info] [2023/12/Fr 08:10:27 pm] Waiting for device to start the operation...
[Info] [2023/12/Fr 08:10:33 pm] Downloading application Edge Body Tracking 0.1.8...
[Info] [2023/12/Fr 08:10:33 pm] Deploy Edge Body Tracking 0.1.8.
[Info] [2023/12/Fr 08:10:34 pm] Application Edge Body Tracking 0.1.8 downloaded successfully.
[Error] [2023/12/Fr 08:10:34 pm] Cannot deploy application. L4T R35.4 is not supported by the application.

Host details:

Product: desktop
Operating System: Ubuntu 20.04.6 LTS
Kernel version: 5.10.120-tegra
L4T release: L4T R35.4.1
CPU model: ARMv8 Processor rev 1 (v8l) (aarch64)
GPU model: OrinAmpere
CPU memory: 6 GB
GPU memory: 6 GB
NVP model: 15W
System Clock:
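
In case it’s useful, this is how I double-checked the L4T release on the box, just reading the standard release file that NVIDIA’s L4T images ship with:

    # Print the L4T release string from the standard Jetson release file.
    with open("/etc/nv_tegra_release") as f:
        print(f.readline().strip())
    # On this box the first line reports R35 (release), REVISION: 4.1,
    # matching the host details above.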

I am happy to post these questions in a new thread, if you prefer.

Hello @daGoedie,

Please do post them in a new thread, thank you very much. 🙂

Done, it’s here:
https://community.stereolabs.com/t/using-zed-hub-to-deploy-install-body-skeleton-tracking-fails-with-an-error


Hey, any updates on when you might have the next SDK version released?
I’m finishing up other development packages but need to return to this issue in about a week.

Hey, how is it going?

@daGoedie

Hey, so, it turns out I was made a liar by that “very shortly”.
I’m very sorry for that; we’ve been working on a lot for this release, and it had to be delayed.

I will refrain from giving an ETA, but I’ll say we’re currently in QA and the version is API- and feature-frozen.

Again, sorry for the inconvenience, and thank you for your understanding.
