Synchronization of two ZED X cameras

Dear support,

I am working on a ROS 2 application in which I launch both NVIDIA's ESSDisparityNode and zed_isaac_ros_multi_camera from the zed-ros2-examples package you have provided. I have two ZED X cameras connected to my Jetson AGX Orin so that I can use a custom baseline of ~24 cm or more.

With a few tweaks and topic remappings, everything seems to launch properly in the same composable node container. I use the Isaac ROS ImageFormatConverterNode to convert from bgra8 to bgr8, then resize the images, then feed them into the model.
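To give an idea of the setup, here is a minimal sketch (not my actual launch file) of how the nodes are composed in one container. The topic names in the remappings are hypothetical placeholders; the real ones depend on what zed_isaac_ros_multi_camera publishes in your configuration.

```python
# Sketch of composing the Isaac ROS nodes in one container.
# Remapped topic names are hypothetical placeholders.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='disparity_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[
            ComposableNode(
                package='isaac_ros_image_proc',
                plugin='nvidia::isaac_ros::image_proc::ImageFormatConverterNode',
                name='left_format_converter',
                parameters=[{'encoding_desired': 'bgr8'}],
                remappings=[
                    ('image_raw', '/zed_left/left/image_rect_color'),  # placeholder
                    ('image', '/left/image_bgr8'),
                ],
            ),
            # ... an identical converter for the right image, two ResizeNodes,
            # and the ESSDisparityNode subscribing to the resized pair.
        ],
        output='screen',
    )
    return LaunchDescription([container])
```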

I have one problem, however: the published /disparity topic updates very sparsely, or not at all, when I echo it. This usually points to a synchronization issue between the left and right images, and I believe NVIDIA's NITROS nodes use exact-time synchronization.
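To check this, I put together a small diagnostic sketch (topic names are placeholders) that pairs the two image streams with both an exact-time and an approximate-time policy. If only the approximate pairing ever fires, the stamps are close but not bit-identical, which would starve an exact-time consumer:

```python
# Diagnostic: do the left/right header stamps match exactly?
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
import message_filters


class SyncProbe(Node):
    def __init__(self):
        super().__init__('sync_probe')
        self.left = message_filters.Subscriber(self, Image, '/left/image_bgr8')
        self.right = message_filters.Subscriber(self, Image, '/right/image_bgr8')
        # Fires only when the two header stamps are exactly equal.
        self.exact = message_filters.TimeSynchronizer([self.left, self.right], 10)
        self.exact.registerCallback(
            lambda l, r: self.get_logger().info('exact-time pair'))
        # Fires when the stamps agree within 10 ms.
        self.approx = message_filters.ApproximateTimeSynchronizer(
            [self.left, self.right], 10, slop=0.01)
        self.approx.registerCallback(
            lambda l, r: self.get_logger().info('approximate-time pair'))


def main():
    rclpy.init()
    rclpy.spin(SyncProbe())


if __name__ == '__main__':
    main()
```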

Is there a way to synchronize two ZED X cameras? Is that already done at the hardware level?

Any ideas and help appreciated 🙂

Normally, the ZED X cameras are hardware-synchronised; you do not need to enable anything unless you are using a ZED Link Quad capture card and the cameras are plugged into different connectors: see ZED Link Quad on AGX Orin - Stereolabs.

In any case, I have not understood your configuration: are you trying to obtain depth information from multiple ZED X One cameras or from a single ZED X camera?

Please explain what you want to achieve because I believe you are missing some important pieces of information.

Hi Myzhar, thanks for the quick reply.

No, I don't really plan on using the depth from these ZED X cameras; I only need their raw (rectified) images to feed the AI model for depth perception that I want to test. My plan is essentially this:

- Get the left-sensor image from my left ZED X camera and the right-sensor image from my right ZED X camera.
- Use the ImageFormatConverterNode from NITROS to convert to bgr8.
- Use the ResizeNode from NITROS to prepare the images for the model.
- Run NVIDIA's ESSDisparityNode to generate the disparity image.

I am already aware of a "difficulty" related to the baseline published in the P matrix of the CameraInfo ROS message. By default this is 0.12 m, which is the true baseline of a single ZED X camera. I am investigating possible solutions for that as well, but I was mostly worried about the absolute synchronization of the image messages first. It's good to know that synchronization should not be the problem.
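For reference, the direction I am exploring looks roughly like this (a sketch with assumed topic names and an assumed 0.24 m baseline, and only meaningful once the wide pair has a proper stereo calibration): republish the right camera_info with Tx recomputed from the custom baseline, since Tx = -fx * B in the 3x4 projection matrix P.

```python
# Sketch: republish camera_info with Tx recomputed for the custom baseline.
# Topic names and BASELINE_M are assumptions, not values from any package.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo

BASELINE_M = 0.24  # hypothetical custom baseline


class BaselinePatcher(Node):
    def __init__(self):
        super().__init__('baseline_patcher')
        self.pub = self.create_publisher(
            CameraInfo, '/right/camera_info_patched', 10)
        self.sub = self.create_subscription(
            CameraInfo, '/zed_right/left/camera_info', self.patch, 10)

    def patch(self, msg: CameraInfo):
        fx = msg.p[0]                 # P = [fx 0 cx Tx; 0 fy cy 0; 0 0 1 0]
        msg.p[3] = -fx * BASELINE_M   # Tx encodes the stereo baseline
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(BaselinePatcher())


if __name__ == '__main__':
    main()
```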

Thanks for your help.

To perform this type of process you need a valid Stereo Calibration.
What’s the resulting baseline?

That's true, but that's something I plan to do after I get some results, even of poor quality. The resulting baseline we want to test is anything between 24 and 30 cm; we don't have the cameras properly mounted yet.

My idea here is that the problem is not hardware, but software.

You should find a way to change this behavior: even if the cameras are triggered at the same moment and the images are acquired synchronously, the data is serialized, and timestamps are assigned when the images become available in the buffer, so they can differ slightly.
As long as the timestamp difference is lower than one grab period (e.g., 33 msec @ 30 Hz), you can consider the frames synchronized.
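For example, one way to do this (a sketch with placeholder topic names, not something provided by the ZED or Isaac ROS packages) is to pair the two streams with an approximate-time policy and republish the right image restamped with the left stamp, so a downstream exact-time consumer sees identical stamps:

```python
# Sketch: pair frames within a grab period, then force exactly equal stamps.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
import message_filters

GRAB_PERIOD_S = 1.0 / 30.0  # 33 msec at 30 Hz


class Restamper(Node):
    def __init__(self):
        super().__init__('right_image_restamper')
        self.pub = self.create_publisher(Image, '/right/image_restamped', 10)
        self.left = message_filters.Subscriber(self, Image, '/left/image_bgr8')
        self.right = message_filters.Subscriber(self, Image, '/right/image_bgr8')
        # Accept pairs whose stamps differ by less than one grab period.
        self.sync = message_filters.ApproximateTimeSynchronizer(
            [self.left, self.right], 10, slop=GRAB_PERIOD_S)
        self.sync.registerCallback(self.restamp)

    def restamp(self, left: Image, right: Image):
        right.header.stamp = left.header.stamp  # force exact equality
        self.pub.publish(right)


def main():
    rclpy.init()
    rclpy.spin(Restamper())


if __name__ == '__main__':
    main()
```

Note that an exact-time consumer will usually also expect the matching camera_info stamps to agree, so the same treatment may be needed there.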


It's a good idea, I'll look into it and perhaps get back to you if needed.

Thanks!
