Positional Tracking: how does it work, and how can I measure uncertainty?

I'm planning on using the ZED's positional tracking as one input to a separate Kalman filter. For context, I'm running it without Area Memory, i.e., in pure odometry mode, to measure pose changes from one frame to the next.
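
For reference, here's roughly how I have the tracking set up (a minimal sketch using the Python API; I'm assuming `enable_area_memory = False` is the right way to get pure odometry, and that `REFERENCE_FRAME.CAMERA` gives the frame-to-frame delta I want):

```python
import pyzed.sl as sl

# Open the camera with metric units so pose deltas come out in meters
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open ZED camera")

# Disable Area Memory so tracking runs as pure frame-to-frame odometry
# (my assumption -- please correct me if this isn't equivalent)
tracking_params = sl.PositionalTrackingParameters()
tracking_params.enable_area_memory = False
zed.enable_positional_tracking(tracking_params)

pose = sl.Pose()
while zed.grab() == sl.ERROR_CODE.SUCCESS:
    # REFERENCE_FRAME.CAMERA should return the pose change relative to
    # the previous frame, i.e., the odometry delta I feed into the KF
    state = zed.get_position(pose, sl.REFERENCE_FRAME.CAMERA)
```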

I have a few questions about how the positional tracking works in the ZED back-end so I can use it effectively in the KF:

My first question: does the positional tracking backend compute odometry from the depth images, from the color images, or from both? It isn't clear whether the backend is doing frame-to-frame 3D scan matching, optical flow, some combination of the two, or something else entirely.

This leads to my second question: is there a built-in way to get the real-time uncertainty of the tracking odometry from one frame to the next? I'd expect that uncertainty goes hand-in-hand with the number and quality of trackable features in the frames (depth and/or color).
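
For context, here's what I'm hoping exists. I can see `pose_confidence` and `pose_covariance` fields on `sl.Pose` in the docs, but it's not clear to me how they're derived or whether they're meaningful in pure odometry mode. Something like this is what I'd want to feed the KF's measurement update:

```python
import numpy as np

# Inside the grab loop from the sketch above: per-frame uncertainty
# readouts I'd like to use as the KF measurement covariance
state = zed.get_position(pose, sl.REFERENCE_FRAME.CAMERA)
conf = pose.pose_confidence  # integer confidence, 0-100
# pose_covariance is 36 floats; I'm assuming it's a row-major 6x6
# covariance over translation + rotation, but the docs don't say
# how it's computed
cov = np.asarray(pose.pose_covariance).reshape(6, 6)
```

If that covariance is actually populated from feature count/quality, that would answer my question, but I haven't found documentation confirming it.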
