How to compensate for camera motion in velocity estimates?

Hello all!
I’m using a custom detector to track objects and get their 3D position and velocity; however, the velocity returned does not account for camera motion. For example, if the camera rotates around the Z axis, the X-axis velocity of tracked objects is significantly non-zero even when the object is stationary (both in reality and as reported by the SDK’s object action state).

Unfortunately, I cannot use the global reference frame for all estimates, as the object position estimates need to be relative to the camera (we have a separate vision system for localization/platform velocity estimation). Are there any other possible solutions? We would still like the velocity estimates to be camera-relative (or, I suppose, simply in the camera’s coordinate system), but in a way that accounts for the camera’s motion.
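To make the effect concrete, here is a minimal numpy sketch (illustrative values, not ZED SDK calls) of the apparent velocity that pure camera rotation induces on a stationary point. A point fixed in the world and observed from a camera moving with linear velocity v_cam and angular velocity omega (both expressed in the camera frame) appears to move at -(v_cam + omega × p) in camera coordinates:

```python
import numpy as np

omega = np.array([0.0, 0.0, 0.5])   # camera angular velocity (rad/s), about Z
v_cam = np.zeros(3)                 # camera linear velocity: zero, pure rotation
p_cam = np.array([0.0, 2.0, 5.0])   # stationary object's position, camera frame (m)

# Apparent velocity of a world-fixed point, as seen in the rotating camera frame:
v_apparent = -(v_cam + np.cross(omega, p_cam))
# -> non-zero X component (about +1 m/s here) even though the object is static
```

This is exactly the symptom described above: a Z rotation turns an object's lateral offset into a spurious X-axis velocity.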

Thank you in advance!

Hi @Roman
with the default settings, the status of the objects is returned in the camera frame.
To compensate for camera movement, you must get the object poses and velocities in the world frame.
See the documentation of the RuntimeParameters::measure3D_reference_frame parameter and its possible values.

Hello @Myzhar! Thank you for your response! Unfortunately, because the rest of our robot operates under an entirely separate global localization system, we need to be able to convert all values into the ‘true’ global reference frame of our world as observed by our other vision/odometry systems. As a result, using the ZED’s internal visual-inertial localization runs the risk of the ZED’s “global” reference frame drifting out of sync with the true “global” reference frame. Is there a way we could perform the compensations that the ZED applies for its global-reference-frame velocities ourselves? These systems also run on different devices, so we would prefer to avoid the latency of sending our true global pose to the Jetson so that the SDK could use it as the ZED’s global pose, if possible.

Apologies for the weirdness of our request, such are the problems with integrating new tech into legacy architectures.

Hi @Roman
you can use the TF tree to convert the object coordinates from the camera frame to the world frame.
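If you do want to perform the compensation yourself, as asked above, a hedged sketch using the rigid-body transport equation follows. It assumes you can supply the camera's linear and angular velocity in the camera frame from your own odometry; the function name and inputs are illustrative, not SDK API:

```python
import numpy as np

def compensate_ego_motion(v_measured, p_obj, v_cam, omega):
    """Return the object's velocity relative to the world, still expressed
    in camera coordinates. All inputs are 3-vectors in the camera frame:
    v_measured - raw object velocity reported by the detector/tracker
    p_obj      - object position
    v_cam      - camera linear velocity (from your own odometry)
    omega      - camera angular velocity (rad/s)
    """
    # Transport equation: the camera's ego-motion contributes
    # -(v_cam + omega x p) of apparent motion, so add it back.
    return v_measured + v_cam + np.cross(omega, p_obj)

# Example: a stationary object while the camera spins about Z.
omega = np.array([0.0, 0.0, 0.5])
v_cam = np.zeros(3)
p_obj = np.array([0.0, 2.0, 5.0])
v_measured = -(v_cam + np.cross(omega, p_obj))  # what the tracker would report
v_true = compensate_ego_motion(v_measured, p_obj, v_cam, omega)
# v_true is ~zero, as expected for a stationary object
```

The result stays in camera coordinates, which matches the requirement above that estimates remain camera-relative; only the spurious ego-motion component is removed.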