Best way to optimize ZED Live Link Fusion for smoother tracking with minimal latency? Unreal skeleton animation issue

Hey all,

Curious if anyone has recommendations for the best Live Link configuration for our particular use case.

We’re currently streaming data from 2 ZED2i cameras through Live Link Fusion at 720p 30fps with Ultra depth mode and skeletal smoothing = 0.2. This gives us what we feel is the most accurate tracking with minimal latency.

The app we’re running in Unreal emits particles from the user’s avatar, and when the user swings their limbs, we want to see a sweeping arc of particles. Unfortunately, what’s actually happening is that either the camera feed or the fused avatar animation (from the ZED Fusion app) delivers the skeleton data in such a way that the arc of particles instead appears in chunks, breaking up the smooth arc with empty sections.
(Example below, waving hand left-to-right. Pardon my MSPaint skills.)

Normally, this wouldn’t be an issue, but in this particular app the particle trail is visibly missing segments during fast motions, so the tracking feed lagging behind the particle emission rate makes the discrepancy far more noticeable.

My first instinct was to raise the framerate of the camera feeds, but running at 60fps (which actually reports as something like 45fps in the Fusion app…) produces a jittery avatar that disrupts the user experience. I also suspected that increasing the skeleton smoothing might help with easing (at both 30 and 60fps), but that just adds latency and makes the app feel sluggish and less responsive.

So, my question is: is there any other configuration for a multi-camera setup that might fix our issue? I noticed this effect isn’t nearly as apparent when running on a single camera, but I don’t think we can sacrifice the ability to track multiple individuals that Fusion gives us. Thanks for any suggestions.

Best,
Tristan

Hi,

If the framerate is similar between the multi-camera and the single-camera setup, you should not see any difference.

As you said, I also think the issue is coming from the “low” frame rate of the body tracking module compared to the frame rate of the engine.

If you can tolerate a slight latency (one frame at 30 fps, i.e. about 33 ms), you might be able to reduce this effect by interpolating the particle positions between two camera frames.

Instead of updating the particle system only when new body tracking data arrives (at 30 fps), you update it at the engine’s frame rate and interpolate the skeleton data for the engine frames that fall between two body tracking samples, if that makes sense.
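Here’s a minimal sketch of that idea as an Unreal actor component. The class name, the `OnBodyTrackingSample` callback, and how you feed it samples are all illustrative, not part of the ZED Live Link plugin; you’d wire it up to wherever your project consumes the fused skeleton data:

```cpp
// BodyLerpComponent.h -- illustrative names throughout; this is not part of
// the ZED Live Link plugin API.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "BodyLerpComponent.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UBodyLerpComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	UBodyLerpComponent()
	{
		PrimaryComponentTick.bCanEverTick = true;
	}

	// Call this from wherever you receive fused skeleton data (~30 Hz),
	// e.g. with the world position of the hand joint you emit from.
	void OnBodyTrackingSample(const FVector& JointPosition)
	{
		PrevPos = LatestPos;
		PrevTime = LatestTime;
		LatestPos = JointPosition;
		LatestTime = GetWorld()->GetTimeSeconds();
	}

	virtual void TickComponent(float DeltaTime, ELevelTick TickType,
	                           FActorComponentTickFunction* ThisTickFunction) override
	{
		Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

		const double Interval = LatestTime - PrevTime;
		if (Interval <= 0.0)
		{
			return; // need at least two samples before we can interpolate
		}

		// Display one tracking interval (~33 ms at 30 fps) behind real time,
		// so every engine frame falls between two known samples and we
		// interpolate rather than extrapolate past the newest one.
		const double Now = GetWorld()->GetTimeSeconds();
		const double Alpha = FMath::Clamp((Now - Interval - PrevTime) / Interval, 0.0, 1.0);
		const FVector Smoothed = FMath::Lerp(PrevPos, LatestPos, static_cast<float>(Alpha));

		// Move the emitter here every engine frame (60+ fps) instead of only
		// when a tracking sample lands (30 fps), which fills in the arc.
		// e.g. NiagaraComponent->SetWorldLocation(Smoothed);
	}

private:
	FVector PrevPos = FVector::ZeroVector;
	FVector LatestPos = FVector::ZeroVector;
	double PrevTime = 0.0;
	double LatestTime = 0.0;
};
```

The one-interval delay is the slight latency I mentioned above; if it turns out to be too noticeable, you could extrapolate instead (Alpha above 1), at the cost of occasional overshoot when the limb changes direction.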

Do you think that could work?

Best regards,
Benjamin V.

Stereolabs Support