Hi there! I am currently trying to implement a similar asynchronous time warp effect to compensate for camera streaming latency, and I found the “ZEDMixedRealityPlugin” in the ZED Unity SDK. I am wondering how the drift corrector is implemented and how the latency is calculated. If you can’t share the code, could you at least share the method used for the drift correction? Thank you!
Basically, we are looking for the headset position at the timestamp of the image (in the past). The headset position is either retrieved (if a pose exists at that timestamp) or estimated if no pose is found. Then we project the image at this position and display it on the screen.
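Since the SDK code itself is C#, here is a language-agnostic sketch in Python of how a timestamped pose history with nearest-pose lookup might work. The class and method names (`PoseHistory`, `pose_at`) are made up for illustration and are not the SDK's API:

```python
from bisect import bisect_left

class PoseHistory:
    """Illustrative sketch: store timestamped headset poses, then look
    up the stored pose closest to an image's capture timestamp."""

    def __init__(self):
        self.timestamps = []  # monotonically increasing, in seconds
        self.poses = []       # e.g. (position, rotation) tuples

    def record(self, t, pose):
        self.timestamps.append(t)
        self.poses.append(pose)

    def pose_at(self, t):
        """Return the stored pose whose timestamp is nearest to t,
        or None if the history is empty."""
        if not self.timestamps:
            return None
        i = bisect_left(self.timestamps, t)
        if i == 0:
            return self.poses[0]
        if i == len(self.timestamps):
            return self.poses[-1]
        before, after = self.timestamps[i - 1], self.timestamps[i]
        return self.poses[i] if after - t < t - before else self.poses[i - 1]
```

The key idea is that the image timestamp is in the past relative to "now", so the lookup walks back through recently recorded poses rather than using the current headset pose.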
That’s what we do here: https://github.com/stereolabs/zed-unity/blob/master/ZEDCamera/Assets/ZED/SDK/Helpers/Scripts/ZEDManager.cs#L2452
I guess the number of past positions that can be found depends on the camera’s framerate settings? Also, the time between Unity’s frame updates varies, so what happens when the algorithm doesn’t find a position at the image’s timestamp? Does it perform an interpolation?
Yes, you can interpolate the headset position if you don’t have one at a specific timestamp.
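A minimal sketch of such an interpolation between the two recorded poses that bracket the image timestamp: linear interpolation for the position and spherical linear interpolation (slerp) for the rotation. This is an assumption about a reasonable approach, not the SDK's actual code; all function names here are illustrative:

```python
import math

def lerp(a, b, u):
    """Linear interpolation between two 3-vectors."""
    return tuple(x + (y - x) * u for x, y in zip(a, b))

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:        # flip one quaternion to take the short arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:     # nearly parallel: fall back to lerp + normalize
        q = tuple(a + (b - a) * u for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - u) * theta) / math.sin(theta)
    s1 = math.sin(u * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Estimate the headset pose at image timestamp t, given the two
    recorded poses that bracket it (t0 <= t <= t1). Each pose is a
    (position, rotation) pair."""
    u = (t - t0) / (t1 - t0)
    return lerp(pose0[0], pose1[0], u), slerp(pose0[1], pose1[1], u)
```

In Unity itself you would typically reach for `Vector3.Lerp` and `Quaternion.Slerp` instead of hand-rolling these.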
We are collecting a new headset pose at a fixed rate; it’s done here, for example: https://github.com/stereolabs/zed-unity/blob/9c342d6d204639637e769a09e6720490b6876cf4/ZEDCamera/Assets/ZED/SDK/Helpers/Scripts/ZEDManager.cs#L2543
How would LateUpdate() collect the pose at a fixed rate? I guess if we used FixedUpdate() the update frequency would be too low…
It’s not at a fixed rate indeed, my bad on that. I meant the headset pose is retrieved every frame.
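So the collection side reduces to appending one pose per rendered frame into a bounded history. A sketch of that idea in Python (the buffer size and callback name are assumptions for illustration; in Unity this append would live in a per-frame callback like LateUpdate()):

```python
from collections import deque

# Bounded history: at ~90 fps (a typical headset refresh rate),
# 90 entries keep roughly one second of poses; maxlen caps memory
# by silently dropping the oldest entry on overflow.
POSE_HISTORY = deque(maxlen=90)

def on_frame(timestamp, position, rotation):
    """Called once per rendered frame: record the current headset
    pose together with its timestamp."""
    POSE_HISTORY.append((timestamp, (position, rotation)))
```

Because frames are not evenly spaced, the stored timestamps are irregular, which is exactly why the lookup at the image timestamp may need interpolation between neighboring entries.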