360 fusion network workflow

Hi All,

Hope this will be useful for all of us, since the documentation is lacking detail on the network fusion method.

The setup I have is 8 ZED2i cameras on networked Jetson Xaviers (image of the camera layout attached).

The aim is to use custom apps (not the cloud platform with edge agents) and get all cameras into one cluster, so my questions and assumptions are:

On each Jetson, run a sender app with .setForLocalNetwork set and zed.startPublishing called, with zed.enablePositionalTracking and zed.enableBodyTracking enabled.
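
Roughly what I have in mind on each Jetson (a sketch in ZED SDK C++; the parameter defaults and the port are just my assumptions):

```cpp
// Per-Jetson sender sketch, as I currently understand the workflow
sl::Camera zed;
zed.open(sl::InitParameters());          // defaults for now -- see Q1 below

zed.enablePositionalTracking();          // default PositionalTrackingParameters
zed.enableBodyTracking();                // default BodyTrackingParameters

sl::CommunicationParameters comm_params;
comm_params.setForLocalNetwork(30000);   // arbitrary example port
zed.startPublishing(comm_params);
```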

Q1: Should any specific PositionalTracking or BodyTracking parameters be set, such as body format, depth mode, etc., or can everything be left at the default values?
Q2: After calibration is done, should the same app be used to stream data to Live Link or any other apps, or should I just use the calibration JSON with Live Link and switch the cameras/Jetsons to image streaming mode, as done in the mono Live Link network setup?
Q3: Can the fused data from the cameras be used to retrieve a fused depth map, or is this fusion workflow only for skeletal tracking?
Q4: Do all Jetsons need to be time synchronized, or will the Fusion API handle all of the synchronization?

It would be useful to have a snippet of your edge agent where all the setup is done, as it's not in your repository.

Thanks in advance.
[Attached image: camera locations v2]

Hi @Hedgehog,

Thank you for the initiative, I’ll make sure the answers to these questions find their way to the documentation.

Q1: Senders/Receivers configuration:

We currently advise, on the senders (see the parameter sketch after these lists):

  • BODY_18 body format (even if you want 34 in the end; use BODY_38 only if you need 38 out of the final fusion)
  • enable_tracking disabled, enable_body_fitting disabled
  • Human body medium
  • PERFORMANCE or ULTRA depth mode, depending on the desired fps
  • FPS set to a value the sender can hold steadily (e.g. 15 if you regularly get less than 30 fps on the sender)
  • Positional tracking enabled

On the receiver:

  • Tracking enabled
  • Body fitting enabled if 34 is desired; for now you can't have fitting together with BODY_18 output (fitting will give 34). (You can have 38 + fitting, though.)
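
For illustration, those settings map roughly to the following SDK parameters (a C++ sketch; field names may differ slightly depending on your SDK version):

```cpp
// Sender side (each Jetson)
sl::InitParameters init_params;
init_params.depth_mode = sl::DEPTH_MODE::PERFORMANCE;    // or ULTRA, depending on the desired fps
init_params.camera_fps = 15;                             // a value the sender can hold steadily

sl::BodyTrackingParameters body_params;
body_params.detection_model     = sl::BODY_TRACKING_MODEL::HUMAN_BODY_MEDIUM;
body_params.body_format         = sl::BODY_FORMAT::BODY_18;
body_params.enable_tracking     = false;                 // the Fusion handles tracking itself
body_params.enable_body_fitting = false;

// Receiver side (Fusion module)
sl::BodyTrackingFusionParameters fusion_body_params;
fusion_body_params.enable_tracking     = true;
fusion_body_params.enable_body_fitting = true;           // enable if you want 34-keypoint output
```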

Q2: How to send data

ZED 360 will give a configuration file.
This file can be used with the Live Link senders, or with the multi-camera samples, on the receiver’s side.
At the same time, you need to run the Live Link senders on each device that manages a camera (the ZED Box).
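
On the receiver's side, consuming that file looks roughly like this (a C++ sketch based on the multi-camera samples; the file name is just an example, and the signatures should be double-checked against your SDK version):

```cpp
// Read the ZED 360 calibration file and subscribe to every sender listed in it
auto configurations = sl::readFusionConfigurationFile(
    "zed360_config.json", sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP, sl::UNIT::METER);

sl::InitFusionParameters init_fusion_params;
init_fusion_params.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP;
init_fusion_params.coordinate_units  = sl::UNIT::METER;

sl::Fusion fusion;
fusion.init(init_fusion_params);

for (auto &conf : configurations) {
    // conf.communication_parameters carries the sender's IP/port from the file,
    // conf.pose carries the camera position computed by ZED 360
    fusion.subscribe(sl::CameraIdentifier(conf.serial_number),
                     conf.communication_parameters, conf.pose);
}
```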

Q3: Is fused depth available

Currently, the Fusion can only merge skeletons. We have plans to implement fused depth in a future release, but it's not available now.
You can retrieve the individual depth of each camera, if you're in the local USB workflow (not over the network), via the Fusion API through Fusion::retrieveMeasure.
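
For the local USB case, that looks something like the sketch below (the exact retrieveMeasure signature should be checked in the Fusion API reference):

```cpp
// Per-camera depth through the Fusion object (local USB workflow only)
sl::Mat depth;
for (auto &conf : configurations) {
    sl::CameraIdentifier uuid(conf.serial_number);
    // Retrieve this camera's own depth map; the maps are not merged
    fusion.retrieveMeasure(depth, uuid, sl::MEASURE::DEPTH);
    // ... use the depth of this specific camera ...
}
```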

Q4: Jetson time synchronization

The Jetsons do need to be time synchronized; you can find documentation about it here: Setting Up Multiple 3D Cameras - Stereolabs

Sender edge agent sample

The Edge Body Tracking application is downloadable on ZED Hub, and a sender application can be made by simply adding startPublishing() with the desired parameters before the retrieveBodies loop in a Body Tracking sample.
We will likely add a sample for this in the SDK very soon.
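
In other words, the sender is roughly the Body Tracking sample plus one extra call (a sketch, not a drop-in file; the port is just an example):

```cpp
// Body Tracking sample structure, with startPublishing() added before the grab/retrieveBodies loop
sl::Camera zed;
sl::InitParameters init_params;
init_params.depth_mode = sl::DEPTH_MODE::PERFORMANCE;
zed.open(init_params);

zed.enablePositionalTracking();

sl::BodyTrackingParameters body_params;       // configured as described in Q1
body_params.enable_tracking = false;          // tracking is redone by the Fusion on the receiver
zed.enableBodyTracking(body_params);

// The only addition compared to the plain sample:
sl::CommunicationParameters comm_params;
comm_params.setForLocalNetwork(30000);        // example port
zed.startPublishing(comm_params);

sl::Bodies bodies;
while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    // The detected bodies are also published over the network for the Fusion to consume
    zed.retrieveBodies(bodies);
}
```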

Do not hesitate if you have more questions, these reports are very valuable for us.

Thank you so much for the detailed reply. One remark on the Q1 answer: I believe that tracking should be Enabled, not Disabled (both body and positional tracking). Is that correct?

So my understanding is that skeletal extraction happens on the Jetsons (because we set the body model there)? Is that correct?

Thanks

@Hedgehog

Thanks for the feedback and questions, I edited my previous message for clarity.

I believe that tracking should be Enabled, not Disabled (both body and positional tracking). Is that correct?

No, the body tracking module itself must of course be enabled, but the enable_tracking variable should be set to false (if you don't want it on the sender's side). The Fusion ignores the tracking data from the senders and handles it on its own, using all the senders' skeleton data.

So my understanding is that skeletal extraction happens on the Jetsons (because we set the body model there)? Is that correct?

Yes, that is correct. The body tracking runs on the Jetsons, and its output is sent to the Fusion module, which aggregates all the skeleton data and then outputs the merged result.
The Fusion process is lighter on resources, as it is "only" merging 3D points, not running AI algorithms.
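
Continuing from the configuration-file sketch under Q2, the merging loop itself is roughly the following (again a sketch; check the Fusion API reference for the exact signatures):

```cpp
// Enable body tracking on the Fusion, using the receiver parameters described under Q1
fusion.enableBodyTracking(fusion_body_params);

sl::Bodies fused_bodies;
sl::BodyTrackingFusionRuntimeParameters rt_params;

bool run = true;
while (run) {
    if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS) {
        // One merged set of skeletons, expressed in the common reference frame from ZED 360
        fusion.retrieveBodies(fused_bodies, rt_params);
    }
}
```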

I hope this helps!