How does Positional Tracking handle losing location?

I’m currently using the ZED 2i stereo camera with the ZED SDK (v3.8, Python API). For my application I need to track the pose of the camera, and for this I’m using the Positional Tracking feature. IMU fusion is currently enabled.

In some cases the camera is recording footage against an almost uniform background and gets stuck in the “SEARCHING” state for, let’s say, 10-20 frames at a time. I’m not able to share the footage, but you can consider it something like a long wall with a door.

In a situation like this, how is the pose estimate calculated? I’m definitely still getting pose updates. Is the IMU data being used alone in this scenario where visual odometry fails, or is something else happening that can account for this?

I’m currently considering how to improve tracking performance so understanding this would be helpful (I’m not able to build a map before carrying out recording, simply due to the use case).
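To make the question concrete: my working assumption is that when visual odometry drops out, the pose is propagated from IMU measurements alone (dead reckoning), which would explain why I keep getting updates but also why they drift. I can't confirm this is what the SDK does internally; here is just a toy 2-D sketch of what IMU-only propagation would look like, with made-up sample values:

```python
import math

def propagate_imu(pose, gyro_z, accel_x, accel_y, dt):
    """Dead-reckon a planar pose from one IMU sample.

    pose: (x, y, yaw, vx, vy); gyro_z in rad/s; accel in the body frame.
    This is a toy 2-D model for discussion, not what the ZED SDK runs.
    """
    x, y, yaw, vx, vy = pose
    yaw_new = yaw + gyro_z * dt
    # Rotate body-frame acceleration into the world frame.
    ax = accel_x * math.cos(yaw) - accel_y * math.sin(yaw)
    ay = accel_x * math.sin(yaw) + accel_y * math.cos(yaw)
    vx_new = vx + ax * dt
    vy_new = vy + ay * dt
    x_new = x + vx * dt + 0.5 * ax * dt * dt
    y_new = y + vy * dt + 0.5 * ay * dt * dt
    return (x_new, y_new, yaw_new, vx_new, vy_new)

# 10 frames of coasting at 1 m/s along x with no rotation:
pose = (0.0, 0.0, 0.0, 1.0, 0.0)
for _ in range(10):
    pose = propagate_imu(pose, gyro_z=0.0, accel_x=0.0, accel_y=0.0, dt=0.1)
print(pose)  # x is ~1.0 after 1 s of coasting
```

Since this kind of integration accumulates sensor bias quadratically in position, it would also explain why the state is reported as SEARCHING rather than OK during those frames.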


When the SDK detects uncertainties in the positional tracking, the state becomes SEARCHING.
Tracking still continues, but the new poses may be relative to an incorrect pose.
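Given that, one practical approach on the application side is to tag poses received while the state is SEARCHING as suspect rather than discarding them, and reconcile them once the state returns to OK. A minimal sketch of that bookkeeping (with an illustrative state enum standing in for the SDK's tracking state):

```python
from enum import Enum

class TrackingState(Enum):
    # Illustrative stand-in for the SDK's tracking state values.
    OK = 0
    SEARCHING = 1

def filter_poses(updates):
    """Split pose updates into trusted and suspect lists.

    `updates` is a list of (state, pose) pairs. Poses received while the
    state is SEARCHING may be anchored to a wrong reference pose, so they
    are kept separately instead of being mixed into the trusted track.
    """
    trusted, suspect = [], []
    for state, pose in updates:
        (trusted if state is TrackingState.OK else suspect).append(pose)
    return trusted, suspect

updates = [
    (TrackingState.OK, "p0"),
    (TrackingState.SEARCHING, "p1"),
    (TrackingState.SEARCHING, "p2"),
    (TrackingState.OK, "p3"),
]
print(filter_poses(updates))  # (['p0', 'p3'], ['p1', 'p2'])
```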


What frame is the pose updated with respect to, internally? Are new poses relative to the pose from one frame before?
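To frame what I mean: as I read the Python API, `Camera.get_position()` takes a `sl.REFERENCE_FRAME` argument, where `WORLD` reports the pose relative to the tracking origin and `CAMERA` reports the displacement since the previous frame. If that's right, chaining the per-frame deltas should reproduce the world pose, as in this toy 2-D homogeneous-transform sketch (translations only, invented values):

```python
def matmul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(dx, dy):
    """2-D homogeneous translation matrix."""
    return [[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]]

# Hypothetical frame-to-frame deltas (what REFERENCE_FRAME.CAMERA would
# report): each is the motion since the previous frame.
deltas = [translation(0.1, 0.0), translation(0.1, 0.02), translation(0.1, -0.02)]

# Chaining them gives the cumulative pose w.r.t. the tracking origin
# (what REFERENCE_FRAME.WORLD would report):
world = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for d in deltas:
    world = matmul(world, d)
print(world[0][2], world[1][2])  # approximately 0.3 0.0
```

So my real question is which of these two conventions the internal update uses while SEARCHING, i.e. whether a bad frame corrupts only one delta or the anchor for everything after it.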