Hello,
I am using a ZED X Mini and a ZED Box (Ubuntu 22.04, ROS 2 Humble) with the latest version of the SDK. I am trying to stream the video feed from ROS 2 to another machine that is running Unity, but I am having a hard time locating any examples or resources on how to stream via ROS 2. Is this possible to do? Any resources or advice would be appreciated.
Thanks.
Myzhar
November 25, 2024, 6:52pm
Hi @fallygal
you can use any kind of ROS 2 package that performs this task.
For example, you can try this.
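For reference, the wrapper publishes the camera feed as standard sensor_msgs/Image topics, which is what generic ROS 2 streaming packages subscribe to. Below is a minimal rclpy sketch of such a subscriber; the topic name /zed/zed_node/rgb/image_rect_color assumes the default camera name and namespace, so adjust it to match your launch configuration.

# Minimal sketch: subscribe to the ZED RGB image topic published by the zed-ros2-wrapper.
# Assumption: default topic "/zed/zed_node/rgb/image_rect_color"; check with `ros2 topic list`.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class ZedImageListener(Node):
    def __init__(self):
        super().__init__('zed_image_listener')
        self.bridge = CvBridge()
        self.subscription = self.create_subscription(
            Image,
            '/zed/zed_node/rgb/image_rect_color',  # assumed default; depends on your launch files
            self.on_image,
            10)

    def on_image(self, msg):
        # Convert to an OpenCV image; from here any encoder/streaming tool can take over.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        self.get_logger().info(f'Received frame {frame.shape[1]}x{frame.shape[0]}')


def main():
    rclpy.init()
    node = ZedImageListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()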
So I can use this (or a similar tool) to stream the video to Unity, and then work with the ZED SDK in Unity based on that stream?
Apologies if I wasn’t specific enough, but I need to:
stream from the ROS 2 / ZED Box environment
have that stream be received in Unity via the ZED SDK
call StartSpatialMapping() from Unity and the ZED SDK to have the camera spatially map an area.
Why is there no option to stream using the zed-ros-wrapper, like what is described here: Local Video Streaming - Stereolabs?
This is what we were doing in our application, but I also want ROS 2 to be aware of all the ZED data, so I have been trying to find a streaming solution within the zed-ros-wrapper.
Myzhar
November 25, 2024, 7:22pm
No, if you need the ZED SDK stream as an input for Unity, you must start a ZED SDK stream from the ZED ROS 2 node, either by calling the enable_streaming service or by setting the stream_server.stream_enabled parameter to true (see the common.yaml excerpt below and the service-call sketch that follows it):
body_tracking:
    allow_reduced_precision_inference: false # Allow inference to run at a lower precision to improve runtime and memory usage
    max_range: 20.0 # [m] Defines an upper depth range for detections
    body_kp_selection: "FULL" # 'FULL', 'UPPER_BODY'
    enable_body_fitting: false # Defines if the body fitting will be applied
    enable_tracking: true # Defines if the detected bodies will be tracked across the image flow
    prediction_timeout_s: 0.5 # During this time [sec], the skeleton will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
    confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of skeleton key points [0,99]
    minimum_keypoints_threshold: 5 # [DYNAMIC] - Minimum number of skeleton key points to be detected for a valid skeleton
stream_server:
    stream_enabled: false # Enable the streaming server when the camera is open
    codec: 'H264' # Encoding type for image streaming: 'H264', 'H265'
    port: 30000 # Port used for streaming. The port must be an even number; any odd number will be rejected.
    bitrate: 12500 # [1000 - 60000] Streaming bitrate (in Kbits/s) used for streaming. See https://www.stereolabs.com/docs/api/structsl_1_1StreamingParameters.html#a873ba9440e3e9786eb1476a3bfa536d0
    gop_size: -1 # [max 256] The GOP size determines the maximum distance between IDR/I-frames. A very high GOP size results in slightly more efficient compression, especially on static scenes, but latency will increase.
    adaptative_bitrate: false # The bitrate will be adjusted depending on the number of packets dropped during streaming. If activated, the bitrate can vary between [bitrate/4, bitrate].
    chunk_size: 16084 # [1024 - 65000] Stream buffers are divided into X chunks, each chunk_size bytes long. Lower chunk_size if the network drops many packets: this generates more chunks per image, but each chunk sent is lighter and less prone to inside-chunk corruption. Increasing this value can decrease latency.
    target_framerate: 0 # Framerate for the streaming output. This framerate must be below or equal to the camera framerate. Allowed framerates are 15, 30, 60 or 100 if possible. Any other value is discarded and the camera FPS is used.
advanced: # WARNING: do not modify unless you are confident of what you are doing
    # Reference documentation: https://man7.org/linux/man-pages/man7/sched.7.html
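For the service route, here is a minimal sketch (Python / rclpy) of toggling the stream at runtime. It assumes the default node name /zed/zed_node and that enable_streaming uses the standard std_srvs/srv/SetBool interface; verify the exact name and type on your system with `ros2 service list` and `ros2 service type`.

# Minimal sketch: call the zed-ros2-wrapper enable_streaming service to start the ZED SDK stream.
# Assumptions: node name "/zed/zed_node" and service type std_srvs/srv/SetBool (default wrapper setup).
import rclpy
from rclpy.node import Node
from std_srvs.srv import SetBool


def main():
    rclpy.init()
    node = Node('zed_stream_toggler')

    # Service name assumes the default camera namespace; adjust to match your launch files.
    client = node.create_client(SetBool, '/zed/zed_node/enable_streaming')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('enable_streaming service not available')
        return

    request = SetBool.Request()
    request.data = True  # True -> start the ZED SDK stream server, False -> stop it

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'Service response: {future.result()}')

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

Once the stream server is running, the ZED Box behaves like a network camera that the ZED SDK on the Unity machine can open as a stream input (using the ZED Box IP address and the configured port), so the rest of your Unity pipeline, including spatial mapping, works as it did before.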