Unexpectedly high latency and slow frame rate

TL;DR: ZED X latency is ~10x higher and frame rate is ~4x lower than expected.

Overall the ZED cameras are very impressive; however, our measurements of latency and frame rate do not match the advertised speeds. Our testing approach and results are below.

I would like to understand whether these speeds are expected and get your suggestions on how we can improve capture time. In particular, is there any way to run the neural plus depth model but still get the RGB data quickly? The depth model has a huge impact on FPS and latency, but it is only needed for certain sections of our pipeline. Getting the RGB data faster would be very beneficial.

Setup
All tests were performed on a Jetson AGX running in MAXN power mode with a quad capture card installed (ZED Link Capture Card | Stereolabs). Nothing else was running on the computer.

Latency Testing Approach


Latency Results



Based on GMSL ZED x vs regular zed and usb cable --- latency - #2 by Myzhar, at 60 FPS we'd expect a latency of 17-35 ms. In practice we observe up to 350+ ms.

FPS Approach
Measure how many framesets we can capture per second in Python.
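A minimal sketch of this kind of measurement (the `measure_fps` helper is illustrative, not our exact script; with pyzed the callable would be e.g. `lambda: cam.grab(runtime) == sl.ERROR_CODE.SUCCESS`):

```python
import time

def measure_fps(grab, n_frames=300):
    """Time n_frames successful captures and return the average frames per second.

    grab: a zero-argument callable returning True when a frame was captured.
    """
    t0 = time.perf_counter()
    captured = 0
    while captured < n_frames:
        if grab():
            captured += 1
    return captured / (time.perf_counter() - t0)
```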

FPS Results

With the neural plus depth model and one camera we achieved only 14.46 frames per second, ~1/4 of the advertised rate.

Related post: Zed X / GMSL2 Latency


Hi @robots
Thank you for your report.
We’ll take the time to analyze all the useful and detailed information you provided to evaluate the behaviors of our GMSL2 cameras and identify any potential bottlenecks.

You do not need to run depth estimation on every frame if you only need depth information occasionally.
You can disable depth processing by setting RuntimeParameters::enable_depth to false when calling the grab function.

Please note that this will disable the processing for all the modules that require depth information.
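One pattern this enables is toggling enable_depth per grab so that depth runs only on the frames that need it. A sketch, assuming the pyzed bindings (the `want_depth` schedule and `run` helper are hypothetical names; the opened `sl.Camera` and the `pyzed.sl` module are passed in so the sketch stays self-contained):

```python
def want_depth(frame_idx, every_n=5):
    """Hypothetical schedule: compute depth only on every Nth frame."""
    return frame_idx % every_n == 0

def run(cam, sl, n_frames=30, every_n=5):
    """Grab RGB on every frame; enable depth processing only when scheduled.

    cam: an opened sl.Camera; sl: the pyzed.sl module (passed in to keep
    this sketch free of a hard pyzed import).
    """
    rt = sl.RuntimeParameters()
    left = sl.Mat()
    depth = sl.Mat()
    for i in range(n_frames):
        rt.enable_depth = want_depth(i, every_n)  # per-grab toggle
        if cam.grab(rt) != sl.ERROR_CODE.SUCCESS:
            continue
        cam.retrieve_image(left, sl.VIEW.LEFT)    # RGB is always available
        if rt.enable_depth:
            cam.retrieve_measure(depth, sl.MEASURE.DEPTH)
```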

We also noticed unexpected latency values for USB3 cameras in your report.
I recommend you use a simpler method to evaluate latency:

  • start a stopwatch application with subsecond precision or open this link

  • point the camera at the screen and start a camera viewer (ZED Explorer, ZED Depth Viewer, …) so that both the stopwatch app and the camera stream of the stopwatch are visible on the screen;

  • start the stopwatch

  • take a few screenshots

The time difference between the stopwatch app and the ZED tool view is the current latency.
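Averaging over a few screenshots gives a more stable number. A tiny helper for the arithmetic (a hypothetical convenience, just the subtraction spelled out):

```python
def mean_latency_ms(readings):
    """Average latency from (app_seconds, stream_seconds) screenshot pairs.

    app_seconds: stopwatch value shown by the app itself;
    stream_seconds: the (older) value visible in the camera stream.
    """
    diffs_ms = [(app - stream) * 1000.0 for app, stream in readings]
    return sum(diffs_ms) / len(diffs_ms)
```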

We also ran tests with the stopwatch + ZED_EXPLORER approach and got similar results. We then switched to the Arduino-based test to eliminate latency from the monitor display and ZED_EXPLORER.

Great suggestion. Is there any way to call grab twice on the same image: once with depth processing enabled and once with it disabled? This would let us quickly get the RGB image for the “quick” pipeline and then subsequently get the depth + RGB for the “slow” pipeline that requires RGBD. I assume this two-step capture would be slower than just getting RGBD, but it could be faster for us at a system level.

Alternatively, can we quickly switch between depth modes via a runtime parameter? For some use cases PERFORMANCE or ULTRA would be sufficient, and for others we need the quality of NEURAL+.

Thanks!

This is not possible. grab is a blocking function that waits for a new stereo frame pair and performs all the processing required by the pipeline.
Consecutive grab calls retrieve consecutive frames.

This is not possible. The depth mode is an initialization parameter and can only be changed by closing and re-opening the camera connection.
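So a mode switch has to go through a full re-open. A sketch of that, assuming pyzed (the `switch_depth_mode` helper is hypothetical; the camera, the `pyzed.sl` module, and the existing InitParameters are passed in so the sketch stays self-contained):

```python
def switch_depth_mode(cam, sl, init_params, new_mode):
    """Close the camera and re-open it with a different depth mode.

    Necessary because depth_mode lives in InitParameters and cannot be
    changed while the camera is open. Re-opening takes far longer than a
    grab, so switch sparingly.
    """
    cam.close()
    init_params.depth_mode = new_mode  # e.g. sl.DEPTH_MODE.NEURAL_PLUS
    return cam.open(init_params) == sl.ERROR_CODE.SUCCESS
```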
