ZED 2i frame tearing

Hello,

When running the ZED 2i camera in our application, we frequently (multiple times per minute) get torn frames. We are running Python 3.9 at 2K resolution and retrieving depth, RGBD, and point clouds at 1280x720. The ZED diagnostic reports USB bandwidth, GPU (RTX 3070), and CPU all OK. We are using ZED's included USB-C cables.

We have written additional post-processing steps to filter out these torn frames, based on specific environmental factors and the fact that the tear always seems to occur between x = 530 and x = 560. However, this is not sufficient long term and is a blocker to full integration in our system.
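A filter along these lines can be sketched with plain numpy (a sketch, not our production code; the threshold and the 530-560 band are assumptions based on what we observed):

```python
import numpy as np

def has_vertical_seam(gray: np.ndarray, x_lo: int = 530, x_hi: int = 560,
                      threshold: float = 30.0) -> bool:
    """Flag frames with a strong vertical discontinuity in a given x band.

    gray: single-channel image as a 2D array (H x W).
    A torn frame shows a sharp column-to-column jump where the two half
    images meet, so we compare the strongest horizontal gradient inside
    the suspect band against the typical gradient elsewhere.
    """
    grad = np.abs(np.diff(gray.astype(np.float64), axis=1))  # H x (W-1)
    band = grad[:, x_lo:x_hi].max(axis=1).mean()  # strongest jump per row
    background = np.median(grad)                  # typical gradient elsewhere
    return band > background + threshold

# Synthetic torn frame: two halves with different brightness, seam at x=545
torn = np.full((720, 1280), 50.0)
torn[:, 545:] = 200.0
clean = np.full((720, 1280), 50.0)
```

On real footage the threshold would need tuning, since natural scenes contain strong vertical edges of their own.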

I have attached a sample image and can provide more if needed.

Hello and thank you for reaching out to us,

You should not get torn frames indeed. Can you send me more images and explain this one a bit? What problem should I see? There are actually two images in one, but it does not seem to be a left/right combination.

Antoine

We have also been seeing this frame tearing for a while but weren’t sure if it was caused by our setup (non-ZED USB cable, high vibration environment). We have been running with:

  • GStreamer (CPP code)
  • ZED2i
  • Jetpack 4.6.0
  • ZED SDK 3.8.2 (but have seen behavior since we started using ZED back on 3.7.2)
  • 1080p

We have seen this behavior also when using the ZED SDK to record an svo video as well

The image @robots posted is actually an L/R combination, just partial and reversed (which is exactly what we see as well):

@robots’ image swapped:

Our image/example:


(Trying to line it up with the grass was a pain and is definitely not perfect)

I think it may be important to note that in @robots’ picture, the blue wire is not aligned. This may indicate that the left and right halves of the torn image are not from the same frame grab, rather than just being reversed/swapped.

Thank you for these precise additions. I’m a little confused; I thought there was a tear within a specific range of pixels (530 to 560), but what I understand now is that the left and right images are swapped? Is that it?

Can you tell me precisely how you obtained your pictures? Maybe you can share your code? It may shed some light. I don’t understand why you get a left side and a right side, for example. The left and right images from the ZED camera show the same field of view; you cannot put them side by side without cropping them manually. And since the L and R images are not taken from the same point of view, you cannot expect continuity between them.

Antoine

Hi @alassagne,

The “frame tearing” @robots and I were referring to is the left and right image getting swapped (“frame tearing” was probably not the best descriptor for it, given what we know now).

I also think you’re right that the reason I could not get the images to “match up” was not due to being grabbed at different times, but because the FOVs and positional offsets would show different perspectives of the environment.

We have seen this behavior when:

  1. Recording an SVO using the code (with minor parameter changes) from zed_examples (CPP)
  2. Using ZED-GStreamer (CPP) to grab Left + Depth frames and then using a GStreamer filesink to save the Left frames to an mp4 video.
  3. Using ZED-GStreamer (CPP) to grab Left + Depth frames and then displaying them on screen using a display sink.

Note: For (1), we generally “view” our SVOs by extracting them into frames (also using code from zed_examples, CPP), so I will admit that I cannot verify that the SVO itself shows this behavior. I also cannot view the SVO right now, as the computer I currently have does not have the SDK installed (and cannot install it). If the SVO does not show the L/R swap, then the commonality between 1/2/3 would be Camera.grab(). Just an idea.

We can send you the code we used for (1) (but not (2)/(3)) and an example SVO where this occurred, if that would help.

Hi,

Thank you for the explanation. I’ll gladly take an SVO that has swapped frames to investigate further.

If it’s in the SVO, the answer will be pretty straightforward: there is a bug somewhere. It is easy to check: the depth and point cloud should look very strange on these frames.
If it’s not in the SVO, it’s probably happening later in your code.
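That check could be sketched roughly like this (a sketch only; it assumes unmatched pixels come back as non-finite depth values, and the 0.5 cut-off is an arbitrary choice, not an SDK constant):

```python
import numpy as np

def depth_looks_suspect(depth: np.ndarray, max_invalid_ratio: float = 0.5) -> bool:
    """Heuristic: flag a depth map as suspect when too many pixels are invalid.

    If left and right eyes are swapped, stereo matching largely fails, so
    most depth values come back NaN/inf. We simply measure the fraction of
    non-finite pixels and compare it to an assumed cut-off.
    """
    invalid = ~np.isfinite(depth)
    return invalid.mean() > max_invalid_ratio

# A mostly valid map vs. one where matching failed almost everywhere
good = np.random.default_rng(0).uniform(0.3, 20.0, size=(720, 1280))
bad = np.full((720, 1280), np.nan)
bad[::10, ::10] = 1.0  # only ~1% of pixels "matched"
```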

Antoine

What would be the best way to send you the SVO? It’s about 2 GB, so I can’t attach it to a message. I’m on my work computer at the moment, so I can’t install the ZED SDK to view/cut it due to permission restrictions.

Thanks @penguin for all this info!

A few details and differences in our process:

  1. We exclusively use Python with SDK 3.8.2 for capture
  2. We are not using SVO recordings anywhere. Our capture code is essentially:
...
            while frame_request_count < frame_request_limit:
                frame_request_count += 1

                # Grab frames from Zed until success
                cam_grab = self.camera.grab(self.runtime_depth_params)
                if cam_grab != sl.ERROR_CODE.SUCCESS:
                    continue
...
                bgr_frame = self.get_bgr_frame()
                depth_frame = self.get_depth_frame()
                point_cloud = self.get_point_cloud()
...
    def get_bgr_frame(self) -> np.ndarray:
        mat = sl.Mat(self.frame_grab_res.width, self.frame_grab_res.height)
        self.camera.retrieve_image(mat, self.view_mode, sl.MEM.CPU, self.frame_grab_res)
        bgr_frame: np.ndarray = mat.get_data(deep_copy=True)[..., :3]  # remove alpha channel

        return bgr_frame

    def get_depth_frame(self) -> np.ndarray:
        mat = sl.Mat(self.frame_grab_res.width, self.frame_grab_res.height)
        self.camera.retrieve_measure(mat, sl.MEASURE.DEPTH, sl.MEM.CPU, self.frame_grab_res)
        depth_frame: np.ndarray = mat.get_data(deep_copy=True)

        return depth_frame
...
    def get_point_cloud(self) -> np.ndarray:
        mat = sl.Mat(self.frame_grab_res.width, self.frame_grab_res.height)
        self.camera.retrieve_measure(mat, sl.MEASURE.XYZ, sl.MEM.CPU, self.frame_grab_res)
        point_cloud: np.ndarray = mat.get_data(deep_copy=True)[..., :3]  # remove color channel

        return point_cloud

view_mode is always sl.VIEW.LEFT

It does seem like the left and right images are appearing in the same frame, joined between x = 530 and x = 560. The simplest way to capture the “torn” frames is just running the camera, saving the images to a folder, and then sorting through them.

@alassagne Is there a private method for me to send additional images? I can also provide the “torn” depth frames saved as numpy arrays if that’s helpful.

Hello, I guess you can use WeTransfer or Google Drive?
Can’t you send an SVO recording of your camera? It will contain the real output of the camera, without any errors your code could have added. Then we can just check image by image until we find swapped ones.

Out of curiosity, why are you using deep copies? You lose performance.

Also, you mentioned 530 and 560 again; what are these numbers?

We are retrieving images at 1280x720. The vertical line where the image “tears” is around x = 530-560.

We don’t use SVO recordings so I don’t have any to send you. Regarding sending the images I can upload them to Google Drive, but I am still unsure how to send them to you privately without posting them publicly here. Do you have an email address or does this forum have direct messaging functionality?

Regarding the deep copy, we were following the guidance in https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Mat.html#a2e8f08eb3ebc14e70692b0a3eecd6756

deep_copy	: defines if the memory of the Mat need to be duplicated or not. The fastest is deep_copy at False but the sl::Mat memory must not be released to use the numpy array.

How can we ensure that sl::Mat memory doesn’t get released in Python without deepcopy?
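One pattern that might work (a sketch; FrameView is a hypothetical helper, not part of the SDK, and plain numpy stands in for sl.Mat here) is to store the owning object and the zero-copy array together, so the array can never outlive its buffer:

```python
import numpy as np

class FrameView:
    """Pairs a zero-copy array with the object that owns its memory.

    Per the docs quoted above, get_data(deep_copy=False) returns an array
    whose memory is still owned by the sl.Mat, so the Mat must stay alive
    for as long as the array is used. Storing both on one object makes
    that automatic: the array cannot outlive its owner.
    """
    def __init__(self, owner, array):
        self._owner = owner   # e.g. the sl.Mat; held only to keep memory alive
        self.array = array    # the zero-copy numpy view into that memory

# Illustration with plain numpy standing in for the sl.Mat buffer:
backing = np.zeros((720, 1280, 4), dtype=np.uint8)   # pretend Mat memory
frame = FrameView(backing, backing[..., :3])         # zero-copy BGR view
frame.array[0, 0] = 255                              # writes through to backing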

Hi,

I had a look at your SVO. Everything is clear now. As you found out yourself looking at the other thread, your issue looks like a USB problem. The SDK already filters out a lot of these problems, but there are still a few that we don’t catch, like these.
A few things that can be tried:

  • Keep the USB connection as clean as possible: no hub, no other devices, no adapters…
  • Have enough power on it
  • Reduce the resolution

Then, our long-term and robust solution would be to use a GMSL connection instead of a USB one. See the ZED X we announced recently: ZED X - AI Stereo Camera for Robotics | Stereolabs

Thanks for the response @alassagne

We had started looking at some of the responses @robots was also discussing and are almost certainly facing the same issue. We have multiple Nvidia Xavier NX units onboard different quadcopters. Our testbed platform also has USB keyboard and mouse dongles in its USB ports, and interestingly, as I start to review some of our videos per camera, it is the only platform showing the frame swapping.

We are tracking the ZED X release for next year and plan to upgrade our current systems to that camera. The GMSL cable is the permanent solution we have been looking for to fix USB disconnection issues (onboard high-vibration quadcopters). Since the frame swapping is non-critical for us and we know how to mitigate it, we will just hold out for the ZED X.

Thanks for your help!
