Hi, I need to create a Qualisys calibration file. For this, I need the rotation of the first camera relative to the world origin. In this setup, the world origin is located about 1.7 meters below the first camera (camera0 position = [0, -1.7, 0]).
ZED360 reports the first camera’s rotation as (0, 0, 0), but visually the camera is clearly tilted downward by about 30 degrees.
How can I calculate the actual rotation of all cameras in the world coordinate system?
I believe I need to use the IMU data for this. If so, how should I apply the IMU values, considering that the IMU and the calibration file seem to use different coordinate axes?
Hi @yvzGrsy55 ,
Welcome to the forum community!
The ZED360 tool calibrates each camera’s orientation and position in the first camera’s frame. As you said, to get all of these orientations and positions in the world frame, you just have to define a transformation from the first camera frame to the world frame. To do so, you can use the IMU data we provide through the ZED SDK API, compute the angles from the quaternion, and then build a transformation matrix, since you also know the translation between the world frame and the first camera frame.
Regarding the IMU: for example, you can decide that the Z axis of the world frame points along the gravity vector, so the tilt you mentioned will be captured in the transformation matrix from the camera frame to the world frame.
I hope this helps!
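A minimal sketch of the idea above, using NumPy and SciPy (the quaternion value, the axis conventions, and the translation here are placeholders for illustration, not values read from the SDK):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed inputs (hypothetical values):
# - imu_quat: orientation quaternion (x, y, z, w) of camera0 in the world
#   frame, e.g. from the ZED IMU; here a 30-degree downward tilt about X.
# - t_cam0_world: translation of camera0 in the world frame ([0, -1.7, 0]).
imu_quat = R.from_euler("x", -30, degrees=True).as_quat()
t_cam0_world = np.array([0.0, -1.7, 0.0])

# Rotation matrix from the camera0 frame to the world frame.
R_world_cam0 = R.from_quat(imu_quat).as_matrix()

# 4x4 homogeneous transform: world <- camera0.
T_world_cam0 = np.eye(4)
T_world_cam0[:3, :3] = R_world_cam0
T_world_cam0[:3, 3] = t_cam0_world

def camera_pose_in_world(R_cam0_i, t_cam0_i):
    """Chain a camera pose given in camera0's frame (from ZED360)
    with the camera0-to-world transform."""
    T_cam0_i = np.eye(4)
    T_cam0_i[:3, :3] = R_cam0_i
    T_cam0_i[:3, 3] = t_cam0_i
    return T_world_cam0 @ T_cam0_i

# camera0 itself has identity rotation and zero translation in its own frame,
# so its world pose is just T_world_cam0.
T_world_0 = camera_pose_in_world(np.eye(3), np.zeros(3))
print(T_world_0[:3, 3])  # camera0 position in the world frame
```

Apply `camera_pose_in_world` to each camera pose reported by ZED360 to get all poses in the world frame.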
Best regards,
Thank you very much for the explanation.
If all rotations in the calibration file are relative to camera0, what about the translation vectors?
For example, camera0 has a translation of [0, -1.7, 0], which seems to be defined in world coordinates.
- Does this mean that translations are in the world frame, while rotations are relative to camera0?
- Or are both translations and rotations relative to camera0, and the translation of camera0 just represents its offset from the world origin?
You also mentioned that I can decide to align the Z axis of the world frame with the gravity vector provided by the IMU, and that the tilt of the camera would then be reflected in the transformation from the camera frame to the world frame.
However, I’m still a bit unclear on how exactly to use the IMU for this transformation.
- Should I take the IMU quaternion as the rotation from camera0 to the world frame and convert it directly into a rotation matrix?
- Or do I need to account for any difference between the IMU coordinate system and the camera coordinate system before using it?
- Once I have this rotation matrix, should I multiply it with each camera’s relative rotation matrix and also rotate their translation vectors accordingly?
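To make the last question concrete, here is how I currently imagine the chaining would work (a sketch only; I'm assuming the IMU quaternion is the camera0-to-world rotation, and the placeholder values are mine):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumption: the IMU quaternion rotates camera0's frame into the world frame.
# Placeholder: a 30-degree downward tilt about the X axis.
R_w0 = R.from_euler("x", -30, degrees=True).as_matrix()
t_w0 = np.array([0.0, -1.7, 0.0])  # camera0 position in the world frame

def to_world(R_0i, t_0i):
    """Convert camera i's pose (rotation R_0i, translation t_0i,
    both relative to camera0) into the world frame."""
    R_wi = R_w0 @ R_0i           # rotate the relative rotation into the world frame
    t_wi = R_w0 @ t_0i + t_w0    # rotate the relative translation, then add the offset
    return R_wi, t_wi
```

Is this the correct way to combine the IMU rotation with the per-camera poses from ZED360?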
I’d really appreciate a bit more clarification on this part.
Thanks again for your support!
@hbeaumont I couldn’t understand this. Please help!
Hi @sauceMaster34 ,
We can confirm that the SDK performs some conversions internally, based on assumptions about the system.
We are looking into this internally to provide a clearer way to share the camera poses and update them before passing them to the Fusion API.
Best regards,