Hi Team,
I’m working with the ZED 2i camera with the capture FPS set to 60, but in practice I’m seeing a much lower frame rate of around 3 FPS.
- I’m using `grab()` followed by fusion operations (IMU, magnetometer, GPS) and YOLO-based inference on each frame (a minimal sketch of this loop follows this list).
- These downstream steps (fusion + inference) take longer than the camera’s frame interval, resulting in dropped frames.
- I’m aware that `grab()` drops frames if it is not called in real time, as per the documentation.
- The camera is mounted on a moving vehicle, so the low frame rate means a loss of critical spatial and temporal data, which hurts downstream geolocation and asset-detection accuracy.
My Questions:
- What are the best practices to avoid losing frames while performing heavy computation (fusion + inference)?
- Is there a recommended multi-threading or queue-based strategy to decouple `grab()` from post-processing?
- Can we access raw camera buffers asynchronously, or cache them before processing, to maintain real-time behavior?
- Would an asynchronous pipeline with `grab()` in a producer thread and fusion/inference in consumer threads be viable with the SDK? (Rough sketch below.)
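To make that last question concrete, this is roughly the pattern I have in mind. It’s only a sketch under my own assumptions: the queue size, the drop-on-full policy, and `fuse_and_infer` are mine, and I don’t know whether calling `grab()` from a dedicated thread like this is actually supported by the SDK:

```python
import queue
import threading
import pyzed.sl as sl

def fuse_and_infer(img):
    ...  # placeholder: sensor fusion + YOLO inference

frame_queue = queue.Queue(maxsize=4)  # bounded buffer between capture and processing

def producer(cam):
    """Call grab() at camera rate and copy each frame out immediately."""
    runtime = sl.RuntimeParameters()
    mat = sl.Mat()
    while True:
        if cam.grab(runtime) == sl.ERROR_CODE.SUCCESS:
            cam.retrieve_image(mat, sl.VIEW.LEFT)
            img = mat.get_data().copy()  # detach from the SDK's internal buffer
            try:
                frame_queue.put_nowait(img)
            except queue.Full:
                pass  # drop this frame rather than stall grab()

def consumer():
    """Run the slow fusion + inference path at its own pace."""
    while True:
        fuse_and_infer(frame_queue.get())

cam = sl.Camera()
init = sl.InitParameters()
init.camera_fps = 60
if cam.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open ZED 2i")

threading.Thread(target=producer, args=(cam,), daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
threading.Event().wait()  # keep the main thread alive
```

In particular, I’m unsure whether `mat.get_data().copy()` is the right way to detach a frame from the SDK’s buffer, or whether there’s a cheaper supported mechanism for this.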
Any guidance on maintaining high frame rate while still running complex inference would be greatly appreciated.
Thanks!