When running the ZED 2i camera in our application, we frequently (multiple times per minute) get torn frames. We are running Python 3.9 at 2K resolution and retrieving depth, RGB-D, and point clouds at 1280x720. The ZED diagnostic tool reports USB bandwidth, GPU (RTX 3070), and CPU are all OK. We are using ZED's included USB-C cables.
We have written additional post-processing steps to filter out these torn frames based on specific environmental factors and the fact that the tear always seems to occur between pixels 530 and 560 on the X axis. However, this is not sufficient long term and is a blocker to full integration in our system.
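Our filtering heuristic can be sketched roughly like this (a minimal sketch, not our exact code; it assumes frames are loaded as HxWx3 numpy arrays, and the threshold and column range are illustrative):

```python
import numpy as np

def has_tear(frame: np.ndarray, x_min: int = 530, x_max: int = 560,
             jump_thresh: float = 40.0) -> bool:
    """Flag a frame as 'torn' if there is an abnormally large
    column-to-column intensity jump inside the suspect X range."""
    gray = frame.astype(np.float32).mean(axis=2)           # H x W grayscale
    col_diff = np.abs(np.diff(gray, axis=1)).mean(axis=0)  # mean jump per column boundary
    suspect = col_diff[x_min:x_max]
    # Compare the suspect region's peak jump against the global median jump
    return suspect.max() > jump_thresh + np.median(col_diff)
```

This catches a hard vertical seam but, as noted, it also depends on environmental factors, so it is not a reliable long-term fix.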
I have attached a sample image and can provide more if needed
You should not get torn frames, indeed. Can you send me more images and explain this one a bit? What problem should I see? There are actually two images in one, but it does not seem to be a left/right combination.
(Trying to line it up with the grass was a pain and is definitely not perfect)
I think it may be important to note that in @robots's picture, the blue wire is not aligned. This may point to the fact that the Left and Right images (in the torn image) are not from the same frame grab, rather than just being reversed/swapped.
Thank you for these precise additions. I'm a little confused: I thought there was a tear within a specific range of pixels (530 to 560), but what I actually understand now is that the left and right images are swapped? Is that it?
Can you tell me precisely how you obtained your pictures? Maybe you can share your code? It may shed some light. I don't get why you get a left side and a right side, for example. The left and right images from the ZED camera show the same field of view; you cannot put them side by side without cutting them manually. And since the L and R images are not taken from the same point of view, you cannot expect continuity between them.
The “frame tearing” @robots and I were referring to is the Left and Right image getting swapped (“frame tearing” was probably not the best descriptor for it, given what we know now).
I also think you’re right that the reason I could not get the images to “match up” was not due to being grabbed at different times, but because the FOVs and positional offsets would show different perspectives of the environment.
We have seen this behavior when:
1. Recording an SVO using the code (with minor parameter changes) from zed_examples (CPP)
2. Using ZED-GStreamer (CPP) to grab Left + Depth frames and then using a GStreamer filesink to save the Left frames to an MP4 video
3. Using ZED-GStreamer (CPP) to grab Left + Depth frames and then displaying them on screen using a display sink
Note: For (1), we generally “view” our SVOs by extracting them into frames (also using code from zed_examples, CPP), so I will admit that I cannot verify the SVO itself shows this behavior. I also cannot view the SVO right now, as the computer I have at the moment does not have the SDK installed (and cannot install it). If the SVO does not show the L/R swap, then the commonality between (1), (2), and (3) would be the Camera.grab() call. Just an idea.
We can send you the code we used for (1) (but not (2/3)) and an example SVO where this occurred if that would help
Thank you for the explanation. I'll gladly take an SVO that has swapped frames to investigate further.
If it's in the SVO, the answer will be pretty straightforward: there is a bug somewhere. It is easy to check: the depth and point cloud should look very strange on these frames.
If it’s not in the SVO, it’s probably happening later in your code.
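For instance, a crude automated version of that check (a sketch, assuming the depth maps are exported as float numpy arrays where failed pixels are NaN or inf, which is how the SDK marks invalid depth; the 50% cutoff is purely illustrative):

```python
import numpy as np

def depth_looks_broken(depth: np.ndarray, max_invalid_frac: float = 0.5) -> bool:
    """Flag a depth map where most pixels failed stereo matching.
    Swapped L/R inputs should make matching fail almost everywhere."""
    invalid = ~np.isfinite(depth)          # NaN / inf pixels
    return invalid.mean() > max_invalid_frac
```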
What would be the best way to send you the SVO? It's about 2 GB, so I can't attach it to a message. I'm on my work computer at the moment, so I can't install the ZED SDK to view/cut it due to permission restrictions.
It does seem like the left and right images are appearing in the same frame and being joined between 530 and 560. The simplest way to capture the “torn” frames is just running the camera, saving the images to a folder, and then sorting through them.
@alassagne Is there a private method for me to send additional images? I can also provide the “torn” depth frames saved as numpy arrays if that’s helpful.
Hello, I guess you can use WeTransfer or Google Drive?
Can't you send an SVO recording of your camera? It will contain the real output of the camera, without any errors that your code could have added. Then we can just check image by image until we find swapped ones.
Out of curiosity, why are you using deep copies? You lose performance.
Also, you mentioned 530 and 560 again; what are these numbers?
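For example (a numpy sketch, not your actual code): slicing gives a zero-copy view that shares the frame's buffer, while a deepcopy duplicates the whole thing. The copy only pays off if the original buffer is about to be overwritten by the next grab.

```python
import copy
import numpy as np

frame = np.zeros((720, 1280, 3), dtype=np.uint8)

view = frame[:, 530:560]            # zero-copy view: shares frame's buffer
snapshot = copy.deepcopy(frame)     # full copy: ~2.7 MB memcpy per frame

frame[:, 530:560] = 255
print(view.max())       # 255 -> the view reflects the later change
print(snapshot.max())   # 0   -> the copy, taken earlier, does not
```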
We don’t use SVO recordings so I don’t have any to send you. Regarding sending the images I can upload them to Google Drive, but I am still unsure how to send them to you privately without posting them publicly here. Do you have an email address or does this forum have direct messaging functionality?
I had a look at your SVO. Everything is clear now. As you found out yourself looking at the other thread, your issue looks like a problem with USB. The SDK already filters a lot of these problems, but there are still a few that we don’t catch, like these.
A few things that can be tried:
Have the USB connection as clean as possible: no hub, no other devices, no adapters…
We had started looking at some of the responses @robots was also discussing, and we are probably (if not definitely) facing the same issue. We have multiple Nvidia Xavier NX units onboard different quadcopters. Our testbed platform also has USB keyboard and mouse dongles in its USB ports, and interestingly, as I start to review our videos per camera, it is the only platform showing the frame swapping.
We are tracking the ZED-X release for next year and plan to upgrade our current systems to that camera. The GMSL cable is the permanent solution we have been looking for to fix USB disconnection issues (onboard a high-vibration quadcopter). Since the frame swapping is non-critical for us and we know how to mitigate it, we will just hold out for the ZED-X.