I’m looking at the Fusion multi-camera body tracking example, and trying to find the complementary example that would run on the networked computers with the sensors. Is this the same as the camera_streaming “sender” example, or is there a different network sender that works exclusively with Fusion?
The sender sample lets you stream the RGB images from one PC to another.
With body tracking fusion, we send the skeleton data directly instead. This lets you run skeleton tracking on the PC each camera is connected to, and it also speeds up the process: the skeleton data is much lighter than images and does not need to be compressed/decompressed.
To enable this, you only need to add a call to the startPublishing method in your body tracking sample; it will automatically send the skeleton data over the network.
You can take a look at our body tracking fusion sample, especially the “ClientPublisher” class (https://github.com/stereolabs/zed-sdk/blob/master/body%20tracking/multi-camera/cpp/src/ClientPublisher.cpp). This is exactly what you need to run on the computers the cameras are connected to.
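Since the question below is about Python, here is a rough sketch of what such a sender could look like with pyzed. This is hardware-dependent (it needs a camera attached) and the parameter and method names (`set_for_local_network`, `start_publishing`, `retrieve_bodies`) are my reading of the SDK 4.x API, so check them against the linked sample and the API reference rather than treating this as runnable-as-is:

```python
import pyzed.sl as sl

# Sketch of a skeleton-data sender (assumes ZED SDK 4.x Python API;
# needs a camera attached, so it will not run without hardware).
zed = sl.Camera()

init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open camera")

# Positional tracking must be enabled before body tracking.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

body_params = sl.BodyTrackingParameters()
body_params.enable_tracking = False  # fusion handles cross-camera tracking
zed.enable_body_tracking(body_params)

# This is the call that turns the sample into a network sender:
# skeleton data is published on the local network for Fusion to consume.
comm_params = sl.CommunicationParameters()
comm_params.set_for_local_network(30000)  # port chosen arbitrarily here
zed.start_publishing(comm_params)

bodies = sl.Bodies()
while True:
    # Each grab/retrieve cycle makes fresh skeleton data available
    # to any subscribed Fusion instance.
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_bodies(bodies)
```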
Thanks for the reply!
I’m using Python for our project at the moment. Are startPublishing() and the ClientPublisher example available in the Python API? I looked at the Python BodyTracking example, and it seems more locally targeted, but also like it could be pared down to do the same thing, provided the startPublishing call is available.
Yes, it is also available in Python. You can take a look at this sample directly: https://github.com/stereolabs/zed-sdk/blob/master/body%20tracking/multi-camera/python/fused_cameras.py.
start_publishing is called here: https://github.com/stereolabs/zed-sdk/blob/master/body%20tracking/multi-camera/python/fused_cameras.py#L100
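For reference, the fusion (receiver) side of that sample boils down to something like the sketch below. Again, the names here (`read_fusion_configuration_file`, `CameraIdentifier`, `Fusion.subscribe`, `Fusion.process`) are my reading of the SDK 4.x Python API, and it needs a real ZED360 configuration file plus live senders to do anything:

```python
import pyzed.sl as sl

# Sketch of the fusion/receiver side (assumes ZED SDK 4.x Python API and
# a calibration file produced by ZED360; needs live senders to run).
fusion = sl.Fusion()
fusion.init(sl.InitFusionParameters())

# The ZED360 config lists each camera's serial number, how to reach it
# (communication parameters), and its pose in the shared world frame.
configs = sl.read_fusion_configuration_file(
    "fusion_config.json", sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP, sl.UNIT.METER)

for conf in configs:
    uuid = sl.CameraIdentifier(conf.serial_number)
    # subscribe() is what connects to each sender's start_publishing() stream.
    fusion.subscribe(uuid, conf.communication_parameters, conf.pose)

fusion.enable_body_tracking(sl.BodyTrackingFusionParameters())

bodies = sl.Bodies()
while True:
    if fusion.process() == sl.FUSION_ERROR_CODE.SUCCESS:
        fusion.retrieve_bodies(bodies, sl.BodyTrackingFusionRuntimeParameters())
```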
Sorry for any confusion: I was asking if the sender code was available in Python. The link above appears to be the fusion code for the server, whereas the C++ example had both a server/receiver and a client/sender.
Yes, in Python, both senders and receivers are implemented in the same code.
Could you clarify what code needs to be running on the senders in order for them to send data? The sample appears to just be code for the fusion server, but maybe I’m misunderstanding.
To put it differently: when the sample calls “subscribe” on the publisher from the fusion configuration, what piece of code receives that request on the subscriber side?
Based on replies in other threads where I was discussing this with Ben, I came to the conclusion that the sender/publisher computers need to be running the camera streaming sender sample, and the receiver needs to run the multi-camera body tracking fusion sample, as noted above. The fusion config for all four cameras needs to be set up in ZED360 before they can be used by the multi-camera body tracking sample.
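To make the ZED360 step concrete: the calibration it produces is a JSON file mapping each camera to its pose in the shared world frame, and the fusion sample loads it at startup. The SDK provides read_fusion_configuration_file to parse the real file, which has more fields than shown here; the simplified schema below is purely illustrative, as a stdlib-only sketch of the idea:

```python
import json

# Hypothetical, simplified ZED360-style calibration file: each top-level
# key is a camera serial number, with that camera's pose in the shared
# world frame. The real file written by ZED360 has more fields (input and
# communication settings); treat this schema as illustrative only.
SAMPLE = """
{
  "21116066": {"world": {"rotation": [0.0, 0.0, 0.0], "translation": [0.0, 0.0, 0.0]}},
  "28054184": {"world": {"rotation": [0.0, 1.57, 0.0], "translation": [1.2, 0.0, 0.5]}}
}
"""

def load_camera_poses(text):
    """Return {serial_number: (rotation, translation)} from config text."""
    data = json.loads(text)
    poses = {}
    for serial, entry in data.items():
        world = entry["world"]
        poses[int(serial)] = (world["rotation"], world["translation"])
    return poses

poses = load_camera_poses(SAMPLE)
print(len(poses))     # one entry per calibrated camera
print(sorted(poses))
```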