Hi again
I did not say that it does not work, just that it is not very efficient. You have senders that each create an sl.Camera and stream only RGB, then a receiver that creates sl.Camera objects again to compute depth, body tracking, etc., and then the receiver also creates an sl.Fusion object.
The recommended way would be to have your senders publish the data directly to the fusion, without the streaming step, so the receiver only runs Fusion. The sample you are looking at is made for a USB workflow, where all the cameras are plugged into the same machine, which is why it does everything itself.
Unfortunately we don’t have a sample that demonstrates what I’m saying, but it is quite easy to build yourself. The sender is just like tutorial 2, but it also calls startPublishing; for example:
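Here is a minimal sender sketch with the Python API (pyzed.sl, SDK 4.x assumed). The port value is a placeholder, and I’m enabling positional and body tracking on the sender since the Fusion consumes that data, as in the network-sender workflow:

```python
# Minimal publishing sender (sketch). Port 30000 is a placeholder.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
# Units/coordinate system must match the Fusion's InitFusionParameters.
init_params.coordinate_units = sl.UNIT.METER
init_params.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# The sender runs the modules the Fusion will consume.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_body_tracking(sl.BodyTrackingParameters())

# Publish directly to the Fusion over the local network: no streaming module.
comm_params = sl.CommunicationParameters()
comm_params.set_for_local_network(30000)  # placeholder port
zed.start_publishing(comm_params)

# Grabbing keeps the pipeline running; published data is sent automatically.
bodies = sl.Bodies()
rt_params = sl.BodyTrackingRuntimeParameters()
while True:
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_bodies(bodies, rt_params)
```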
The receiver is just like the fusion sample you have, without the part where it starts the cameras; roughly:
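A minimal receiver sketch under the same assumptions; "fusion_config.json" is a placeholder for your ZED360 calibration file, which carries each sender’s pose and communication parameters (IP/port):

```python
# Fusion-only receiver (sketch): no sl.Camera is opened here.
import pyzed.sl as sl

configs = sl.read_fusion_configuration_file(
    "fusion_config.json",  # placeholder path to the ZED360 calibration file
    sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP,
    sl.UNIT.METER)

fusion = sl.Fusion()
init_fusion = sl.InitFusionParameters()
init_fusion.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
init_fusion.coordinate_units = sl.UNIT.METER
fusion.init(init_fusion)

# Subscribe to each publisher instead of opening the cameras locally.
for conf in configs:
    uuid = sl.CameraIdentifier()
    uuid.serial_number = conf.serial_number
    fusion.subscribe(uuid, conf.communication_parameters, conf.pose)

fusion.enable_body_tracking(sl.BodyTrackingFusionParameters())

bodies = sl.Bodies()
rt = sl.BodyTrackingFusionRuntimeParameters()
while True:
    if fusion.process() == sl.FUSION_ERROR_CODE.SUCCESS:
        fusion.retrieve_bodies(bodies, rt)
```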