I am conducting a technical evaluation between the ZED X (4mm P) and the ZED 2i (4mm P) to determine the optimal hardware configuration for high-velocity dynamic scene capture (target moving at 6-9 m/s).
Given that we are utilizing a workstation with an NVIDIA RTX 5090, we would like to clarify the following technical points:
1. Motion Artifacts & Shutter Impact: How does the Rolling Shutter of the ZED 2i compare to the Global Shutter of the ZED X when capturing objects moving at 7 m/s? In a high-compute environment (RTX 5090), can the Neural Depth Engine 2 effectively mitigate rolling shutter distortions, or is Global Shutter a baseline requirement for maintaining geometric integrity in such scenarios?
2. Point Cloud Consistency at Range: For a target positioned 3 to 5 meters from the camera, which sensor/lens combination provides higher point cloud density and more stable edge definition during rapid lateral movement?
3. Interface Throughput: When processing 1200p depth at maximum frame rates, does the GMSL2 interface provide a measurable advantage in terms of frame-drop resistance or data latency compared to the USB 3.0 interface of the ZED 2i?
4. Hardware Sync: For multi-camera setups, does the ZED X offer superior hardware-level frame synchronization compared to the ZED 2i’s software-based approach?
Our primary goal is to establish a high-precision baseline for 3D spatial data. Any technical insights or performance benchmarks would be extremely helpful.
It’s not possible to mitigate the Rolling Shutter effect with software processing. The only way to reduce it is to lower the exposure time to the minimum supported, but the final result will still not be comparable to a Global Shutter sensor, which captures the full image in a single instant.
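To put a number on the geometric error, here is a minimal back-of-envelope sketch. The ~15 ms full-frame readout window is an illustrative assumption, not a ZED 2i specification; check the sensor datasheet for the actual value.

```python
# Worst-case rolling-shutter skew: how far a laterally moving target
# travels between the readout of the first and last sensor rows.
# The readout time below is an ASSUMED illustrative value.

def rolling_shutter_skew_m(velocity_mps: float, readout_s: float) -> float:
    """Real-world lateral displacement accumulated during one frame readout."""
    return velocity_mps * readout_s

# 7 m/s target, assumed ~15 ms rolling-shutter readout window
skew = rolling_shutter_skew_m(7.0, 0.015)
print(f"Skew at 7 m/s: {skew * 100:.1f} cm")  # ~10.5 cm of real-world skew
```

Even with an optimistic readout time, the skew at 7 m/s is on the order of centimeters, which is why a Global Shutter is effectively a baseline requirement here rather than something depth post-processing can recover.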
The 4 mm focal-length optics are recommended in this case.
Yes, GMSL2 is recommended as it can provide data at 60 FPS.
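As a rough sanity check on why the interface matters, the sketch below estimates the raw data rate of a 1200p stereo stream at 60 FPS. The 1920×1200 per-eye resolution and 2 bytes per pixel are assumptions for illustration; actual on-wire pixel formats and protocol overheads differ.

```python
# Rough throughput estimate for a stereo video stream.
# Resolution and bytes-per-pixel are ASSUMED illustrative values.

def stream_gbps(width: int, height: int, bytes_per_px: float,
                eyes: int, fps: int) -> float:
    """Raw payload rate in Gbit/s for an uncompressed stereo stream."""
    return width * height * bytes_per_px * eyes * fps * 8 / 1e9

rate = stream_gbps(1920, 1200, 2, 2, 60)
print(f"Stereo 1200p @ 60 FPS: {rate:.2f} Gbit/s")  # ~4.42 Gbit/s
```

That figure sits uncomfortably close to the practical payload of USB 3.0 (5 Gbit/s signaling, less after encoding and protocol overhead), which is where frame drops come from, while a dedicated GMSL2 link has more headroom.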
So if we’re going to capture images of fast-moving human bodies, is it possible to complete the task with the ZED 2i, or is the advantage of the ZED X over the ZED 2i obvious in this respect?
Yes, the advantage is obvious. If the exposure time is too low, the images are dark and the visual information available for depth processing is degraded, resulting in degraded depth.
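The exposure-time trade-off can also be quantified. A minimal pinhole-model sketch of motion blur in pixels follows; the 4 mm focal length matches the lens discussed above, but the 3 µm pixel pitch and 1 ms exposure are illustrative assumptions, not sensor specifications.

```python
# Motion blur in pixels under a pinhole camera model:
#   blur_px = v * t_exp * f / (Z * pixel_pitch)
# Pixel pitch and exposure time are ASSUMED illustrative values.

def motion_blur_px(v_mps: float, t_exp_s: float, focal_m: float,
                   dist_m: float, pitch_m: float) -> float:
    """Image-plane smear, in pixels, of a laterally moving point target."""
    return v_mps * t_exp_s * focal_m / (dist_m * pitch_m)

# 7 m/s subject, 1 ms exposure, 4 mm lens, 4 m range, 3 um pixel pitch
blur = motion_blur_px(7.0, 1e-3, 4e-3, 4.0, 3e-6)
print(f"Blur: {blur:.1f} px")  # ~2.3 px
```

This shows the tension directly: shortening the exposure reduces blur (and rolling-shutter artifacts), but past a point the image becomes too dark for reliable stereo matching, as noted above.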