In our application, a ZED2 is mounted on a robotic arm, and its job is to build a spatial map of the robot's workspace so we can avoid collisions later on. Using the robot's kinematics, we have a very accurate camera position available. Is it possible to feed this information into the camera to improve spatial mapping? That way, we could get rid of the drift issue.
Thanks for the help,
Marcel
Hi Marcel,
You can restart the positional tracking algorithm with the new known pose as the init_position value in the PositionalTrackingParameters structure.
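As a minimal sketch, this is roughly how you could build the initial pose from the robot's forward kinematics and hand it to the SDK. The helper and the example values below are illustrative, and the ZED API calls shown in comments (`pyzed.sl`, `PositionalTrackingParameters`, the `_init_pos` keyword) are my assumption of the Python API; please check them against the SDK version you are using:

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation vector,
    e.g. the camera pose obtained from the robot's forward kinematics."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Illustrative example: camera 0.5 m above the robot base, rotated 90 deg about Z.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_cam = pose_matrix(Rz, np.array([0.0, 0.0, 0.5]))

# With the ZED SDK you would then (requires a connected camera, not runnable here;
# parameter names are an assumption, verify against your SDK version):
#   import pyzed.sl as sl
#   init_transform = sl.Transform()       # fill from T_cam (rotation + translation)
#   params = sl.PositionalTrackingParameters(_init_pos=init_transform)
#   zed.disable_positional_tracking()
#   zed.enable_positional_tracking(params)
```

Restarting tracking with this pose re-anchors the camera in your robot's coordinate frame, but see the caveat below about the map being reset.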
The problem is that restarting the positional tracking module also resets the 3D map.
@SL-PY can you confirm this behavior?
What I would actually like is to feed our camera position directly into the spatial mapping algorithm, since it is probably (a lot?) more accurate than what the camera itself can estimate.