How to avoid glitchy frames?

I’m using two ZED2i cameras in a very simplified setup:

  • fetch the left camera only at 640x360 resolution
  • use OpenCV’s background subtractor

This is on Windows 11 with Python 3.10.

I have one basic script that takes the camera id as an argument.
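
Roughly, that script looks like this (a trimmed sketch: the actual processing is more involved, I’m picking the camera with set_from_camera_id and resizing the VGA left image down to 640x360, exact details simplified):

import sys
import cv2
import pyzed.sl as sl

camera_id = int(sys.argv[1])

init = sl.InitParameters()
init.set_from_camera_id(camera_id)
init.camera_resolution = sl.RESOLUTION.VGA  # native 672x376, resized to 640x360 below

cam = sl.Camera()
if cam.open(init) != sl.ERROR_CODE.SUCCESS:
    sys.exit("failed to open camera %d" % camera_id)

left = sl.Mat()
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    if cam.grab() == sl.ERROR_CODE.SUCCESS:
        cam.retrieve_image(left, sl.VIEW.LEFT)  # left image only, no depth / point cloud
        frame = cv2.resize(cv2.cvtColor(left.get_data(), cv2.COLOR_BGRA2BGR), (640, 360))
        mask = subtractor.apply(frame)
        cv2.imshow("camera %d" % camera_id, mask)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

cam.close()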

I am expecting two scripts launched with different camera ids to run smoothly.
What I am experiencing is glitchy frames every once in a while (e.g. 3-4 times per minute).
(The frames appear offset on the x axis by a decent amount.)

I am noticing that this happens less when I run one instance of the script using one camera only.

I’ve tried changing the resolution to 672x376 instead of 640x360, but I can still see glitchy frames.
I’m only retrieving the left image (not the right image, depth, point cloud, or sensor data).

What can I check to ensure the image streams are stable? (What’s the recommended way to use two cameras from two Python scripts without interference?)

Thank you,
George

Hello and thank you for reaching out to us,

One very probable issue would be a USB limitation. What is your connection scheme?

  • USB hub
  • cameras connected to the same USB channel
  • cable extensions
  • ZED2i with the connector screws not tightened

It can also be caused by overload on your host machine. What are your CPU and GPU loads?

Antoine

Hi, to add to @alassagne’s comment, you can try lowering the framerate to reduce the required bandwidth. If you’re using 640x360 images, you can set the resolution to VGA or HD720 and the framerate to 30, for instance, like this:

import pyzed.sl as sl

init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.VGA  # or sl.RESOLUTION.HD720
init.camera_fps = 30
cam = sl.Camera()
status = cam.open(init)
if status != sl.ERROR_CODE.SUCCESS:
    print("Camera open failed:", repr(status))

The available resolution/framerate combinations are detailed here: RESOLUTION Class Reference | API Reference | Stereolabs

@alassagne @adujardin Apologies for the delay. I had to run some tests and gather more data.

The connection scheme turned out to be more complex:

  • Fiber Optic USB 3.0 extenders
  • cameras connected to the same USB bus
    (I couldn’t verify whether the connector screws were tightened)

I’ve checked the load: the basic script I use (which I run twice, each with a different camera id, mimicking multiprocessing) takes 3-6% CPU load.

I’ve run multiple tests reducing resolution and frame rate. I was already using sl.RESOLUTION.VGA. So far I’ve managed to get no glitchy frames with 2 cameras by reducing the resolution to 320x180 and the frame rate to 15 fps.
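
For reference, the stable configuration looks roughly like this (a sketch only: here the 320x180 downscale is requested at retrieval time, but resizing the VGA image after retrieval with OpenCV works as well):

import pyzed.sl as sl

init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.VGA  # native 672x376
init.camera_fps = 15                        # 15 fps instead of 30 to lower the USB bandwidth

cam = sl.Camera()
status = cam.open(init)

left = sl.Mat()
if cam.grab() == sl.ERROR_CODE.SUCCESS:
    # ask the SDK for a downscaled copy of the left view
    cam.retrieve_image(left, sl.VIEW.LEFT, sl.MEM.CPU, sl.Resolution(320, 180))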

I’ve checked the host PC’s motherboard datasheet, and the next test will be connecting via USB directly (if possible), using one port at the back and one at the front, which should land on different USB buses.

Many thanks for the input: much appreciated!

@george.profenza Was the x axis offset similar to the effect seen in Zed2i Frame tearing - #10 by robots? Specifically was the entire frame still filled with data or were some columns entirely missing data?

@robots most of the time I’ve seen the entire frame almost swapped (left with right and vice versa). I don’t recall missing columns though.

(This was slightly different from glitches I remember back in 2016 using a ZED 1, which on top of x axis offsets also had what appeared to be jpeg/compression artefacts during glitchy phases.)

The solution to my specific issue with the ZED2i on fiber optic USB extenders was, for now, to drop the frame rate to 15 and the resolution to 320x180. In my basic setup I can get away with that, but it may not be good enough in other setups.

Update @robots I’ve read through the whole thread you’ve linked.
If you still have access to the camera and a computer with the ZED SDK installed, recording an SVO will help the debugging process a lot. It will be worth keeping track of whether you’re recording with the precompiled ZED Explorer app or the zed-examples (C++ or Python), and which resolution / compression settings you’re using.
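
If you go the Python route, a minimal recording loop looks roughly like this (the filename, duration and compression mode are just placeholders):

import pyzed.sl as sl

init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.VGA
init.camera_fps = 30

cam = sl.Camera()
if cam.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("failed to open the camera")

# record every grabbed frame into an SVO file
rec_params = sl.RecordingParameters("tearing_test.svo", sl.SVO_COMPRESSION_MODE.H264)
if cam.enable_recording(rec_params) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("failed to start SVO recording")

for _ in range(30 * 30):   # roughly 30 seconds at 30 fps
    cam.grab()             # each successful grab appends a frame to the SVO

cam.disable_recording()
cam.close()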

Additionally, I agree with @alassagne’s suggestion to not duplicate frame data:

  • you’re already using a class, so you should be able to pre-allocate the sl.Mat instances you need for the depth / BGR frames and point cloud (instead of re-allocating new mats multiple times per second / once per new ZED frame)
  • no need for a deep copy: you should be able to use [..., :3] to get a numpy “view” of the RGB (no alpha) or XYZ (no packed RGBA) data without duplicating it (see the sketch after this list)
  • a bit off-topic, but scalene might be handy to profile the code and spot memory leaks/slow CPU areas.
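
A rough sketch of what I mean (variable names are just examples, the actual processing is stripped out):

import pyzed.sl as sl

cam = sl.Camera()
cam.open(sl.InitParameters())

# pre-allocate once and reuse every frame, instead of creating new sl.Mat objects per grab
image = sl.Mat()
point_cloud = sl.Mat()

while cam.grab() == sl.ERROR_CODE.SUCCESS:
    cam.retrieve_image(image, sl.VIEW.LEFT)
    cam.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)

    # numpy views onto the SDK buffers: no deep copy
    bgr = image.get_data()[..., :3]        # drop the alpha channel
    xyz = point_cloud.get_data()[..., :3]  # drop the packed RGBA channel
    # note: some OpenCV calls expect a contiguous array; wrap the view in numpy's ascontiguousarray() if needed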

FWIW, you’re dealing with 1 camera over USB directly, so hopefully you’ll encounter fewer issues overall :crossed_fingers:. Just to rule things out, I’d keep just the 1 ZED camera attached to USB, with no other devices on the same USB bus, and try the same tests I did:

  • 1st reducing the number of streams (e.g. grabbing the left camera only)
  • then reducing resolution
  • then reducing framerate
At some point I expect the left frame to become stable. I would then slowly increase framerate/resolution one step at a time, testing to narrow down the point at which tearing starts to occur again. Hopefully this will point to whether it’s related to hardware (cables/ports/USB bus bandwidth/etc.), the OS, or the software stack (the actual code or libraries in the middle, etc.).
The one test I haven’t done on my side is testing with a C++ sample instead of Python, just to rule out yet another variable (of the many involved). (e.g. there’s the default Python distribution, but there are other distributions: Intel’s optimised Python distribution, the Python framework that ships with OS X, etc.) What OS are you on, btw?
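
For the narrowing-down part, even something as crude as counting failed grabs and measuring the effective framerate at each setting can help (a rough sketch, not the exact script I used):

import time
import pyzed.sl as sl

init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.VGA   # bump this up one step at a time...
init.camera_fps = 15                         # ...and the framerate too, between test runs

cam = sl.Camera()
if cam.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("failed to open the camera")

frames = 15 * 60                             # roughly one minute at 15 fps
failed = 0
start = time.time()
for _ in range(frames):
    if cam.grab() != sl.ERROR_CODE.SUCCESS:
        failed += 1
elapsed = time.time() - start

print("effective fps: %.1f, failed grabs: %d" % (frames / elapsed, failed))
cam.close()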

One idea that comes to mind, if distance over USB is a problem, is doing the processing “on the edge”.
For example, you’d use a decent enough single board computer (nvidia jetson, etc.) connected straight to the ZED via USB (without any (active) USB extensions), do the image / point cloud processing on that machine, then only output the resulting sparser data via ethernet to wherever it needs to go. (This is of course subject to budget/time and introduces yet another thing to worry about in your system, but it’s an idea to get around USB cable length issues.)
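
As a very rough illustration (everything here is hypothetical: the receiver address, JSON over UDP, sending only bounding boxes of moving blobs), the edge side could be as small as:

import json
import socket

import cv2
import pyzed.sl as sl

# runs on the edge device next to the camera; only small results leave over the network
DEST = ("192.168.1.50", 9000)   # hypothetical receiving machine
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

cam = sl.Camera()
cam.open(sl.InitParameters())
left = sl.Mat()
subtractor = cv2.createBackgroundSubtractorMOG2()

while cam.grab() == sl.ERROR_CODE.SUCCESS:
    cam.retrieve_image(left, sl.VIEW.LEFT)
    mask = subtractor.apply(cv2.cvtColor(left.get_data(), cv2.COLOR_BGRA2BGR))
    # send only the bounding boxes of the moving blobs, not full frames / point clouds
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    sock.sendto(json.dumps(boxes).encode(), DEST)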

HTH

Thank you for all this information! I’ll give it a try next week


A few additions to this very precise response (thanks @george.profenza!):

  • There should not be any low-level difference between C++ and Python: the USB handling is the same, and the Python functions just call the corresponding C++ functions in the background.
  • USB is always a strong limitation. If other connectors suit you, we recently announced a GMSL camera (which does not replace our current and future USB cameras). GMSL allows higher bandwidth and reliability. See here: ZED X - AI Stereo Camera for Robotics | Stereolabs