My current setup has a ZED X and two ZED X One GS cameras (operating as a stereo pair), all connected to a ZED Link Quad on an AGX Orin.
I start the streams over the local network using the streaming sender sample and ZED_Media_Server --cli.
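For context, the sender side boils down to something like this per camera (my paraphrase of the streaming sender sample; the port is a placeholder and camera selection is omitted):

```cpp
// Sender: open the camera and publish its stream on the local network.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init;
    init.depth_mode = sl::DEPTH_MODE::NONE;  // the sender only forwards images

    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    sl::StreamingParameters stream;
    stream.codec = sl::STREAMING_CODEC::H264;
    stream.port = 30000;  // a unique port per camera
    if (zed.enableStreaming(stream) != sl::ERROR_CODE::SUCCESS) return 1;

    // grab() keeps the encoder fed; nothing is retrieved on the sender side
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {}
}
```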
I have five scripts running on the Jetson that access the streams by IP address (three use the ZED X, two use the X Ones) as inputs for different ML models / processing pipelines.
- Is this a good setup for consuming the camera data? (When the processing load gets heavy, I plan to offload some of the scripts to another computer on the same network.)
I need depth information in all five scripts.
My assumption is that each script currently spins up its own instance of the SDK, so I end up computing depth five times instead of twice (once per camera pair).
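Concretely, I believe each of the five scripts is doing roughly this (IP, port, and depth mode are placeholders for whatever each script actually uses):

```cpp
// Receiver: each script opens the network stream and runs its own
// depth pipeline, so depth is recomputed in every process.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init;
    init.input.setFromStream("192.168.1.10", 30000);  // placeholder IP/port
    init.depth_mode = sl::DEPTH_MODE::NEURAL;         // depth computed here, per process

    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    sl::Mat image, depth;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveImage(image, sl::VIEW::LEFT);
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);
        // ... feed image/depth into this script's ML model ...
    }
}
```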
I currently don't use ROS and cannot switch over to a ROS-based stack.
- What would be the ideal way to access depth in all the scripts without computing depth on the same image pair multiple times?
- Can I somehow stream depth along with the images, using the SDK, when I start the stream?
Because the scripts have varying processing times, they may grab different frames depending on when they call sl::Camera::grab(). So a hub script that streams all frames and computes depth for each of them before streaming seems like a good solution, but I'm not sure how to achieve it. Does the SDK support something like this?
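If the SDK can't do it natively, the only approach I can think of is hand-rolling a hub per camera pair that computes depth once and republishes it over a plain socket. A rough sketch (the wire format and port are made up, error handling is omitted, and it assumes a packed CPU depth buffer):

```cpp
// Hypothetical hub: compute depth once per camera pair, then forward
// (timestamp, width, height, raw float32 depth) to a consumer over TCP.
#include <sl/Camera.hpp>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

int main() {
    sl::Camera zed;
    sl::InitParameters init;
    init.input.setFromStream("192.168.1.10", 30000);  // placeholder source stream
    init.depth_mode = sl::DEPTH_MODE::NEURAL;

    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) return 1;

    // Minimal single-client TCP server on a made-up port.
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(40000);
    bind(srv, (sockaddr*)&addr, sizeof(addr));
    listen(srv, 1);
    int client = accept(srv, nullptr, nullptr);

    sl::Mat depth;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);  // depth computed once, here

        uint64_t ts = zed.getTimestamp(sl::TIME_REFERENCE::IMAGE).getNanoseconds();
        uint32_t w = depth.getWidth(), h = depth.getHeight();

        send(client, &ts, sizeof(ts), 0);
        send(client, &w, sizeof(w), 0);
        send(client, &h, sizeof(h), 0);
        send(client, depth.getPtr<sl::float1>(), size_t(w) * h * sizeof(float), 0);
    }
    close(client);
    close(srv);
}
```

The obvious problem with this is matching the forwarded depth to whichever image frame each consumer script happens to grab (the timestamp would have to do that), which is why I'd much rather use something built into the SDK if it exists.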
@Myzhar, any help on this would be greatly appreciated.