Using the IMU Pose.Translation data

Hello,

I’m trying to use the translation data provided by the IMU sensor to determine the position of our ZED 2i in free space. Since we can’t guarantee that the camera always has a clear line of sight, this seemed like a better option than the typical positional tracking.
I’m accessing the sensor data like this:

sensors_data = sl.SensorsData()
if zed.get_sensors_data(sensors_data, sl.TIME_REFERENCE.CURRENT) == sl.ERROR_CODE.SUCCESS:
    # the IMU pose is integrated on-board; the translation comes back as a numpy array
    transl = sensors_data.get_imu_data().get_pose().get_translation().get()

This works well so far and I get a numpy array. But the values on the individual axes are very small and fluctuate strongly from general sensor noise alone. Rotating the camera, on the other hand, changes the values a lot (which shouldn’t happen, should it?). Did I understand the use of translation correctly, or is it not suitable for positions and I should focus on positional tracking instead?

Thanks a lot!

Hi @Asimovcowitz1
what you are reporting is the main reason why sensor fusion is the best approach to retrieve reliable positional tracking information.
IMU values are deeply affected by noise: a position estimated from the IMU alone requires double-integrating noisy accelerations, so the error grows rapidly over time. Correcting the inertial estimate with visual odometry information yields reliable attitude values.
Naturally, the same holds for visual odometry with respect to inertial attitude estimation.
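
To illustrate, here is a minimal sketch (not SDK code; the sample rate and noise level are made-up assumptions) of what double-integrating pure accelerometer noise does to a position estimate:

import numpy as np

# hypothetical values: 400 Hz IMU rate, 0.01 m/s^2 accelerometer noise std
rng = np.random.default_rng(0)
dt, sigma = 1.0 / 400.0, 0.01
accel_noise = rng.normal(0.0, sigma, size=4000)  # 10 s of noise, camera not moving
velocity = np.cumsum(accel_noise) * dt           # first integration
position = np.cumsum(velocity) * dt              # second integration
print(f"apparent drift after 10 s standing still: {position[-1]:.4f} m")

Even with the camera perfectly still, the integrated position wanders, which matches the fluctuations you observed.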
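
For reference, a minimal sketch of enabling the SDK's positional tracking module and reading the resulting camera pose (assuming the standard pyzed API, with default parameters):

import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# enable the positional tracking module (default parameters)
tracking_params = sl.PositionalTrackingParameters()
zed.enable_positional_tracking(tracking_params)

pose = sl.Pose()
runtime = sl.RuntimeParameters()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    # camera pose with respect to the world frame
    state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
    translation = pose.get_translation().get()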

Okay, thanks for the explanation. Are the values provided by the positional tracking module the result of fusing IMU values with visual odometry, or are they based on the camera’s view alone? Is it possible to combine both in the API in a meaningful way?