Hello, I am currently working on transforming world coordinates into the coordinate frame of a chosen camera frame from an SVO file. The final goal of my project is to produce something like a heat map on a single frame chosen from the SVO file. The frame number is known, and the "heat" data is matched with the positional data based on their timestamps. However, I am having trouble translating the origin of the world coordinates into the camera coordinates of the chosen frame. I am totally new to pyzed programming, so I am still confused about how the coordinate systems work. To my understanding, there should be a matrix for the transformation between world coordinates and camera coordinates; I just don't know where to find this matrix or how to apply it. Can anyone give me some hints on how to accomplish my goal? Alternatively, is there a method to project world coordinates into image coordinates?
When the ZED SDK starts and the positional tracking module is enabled, the default initial position is [0,0,0], with the origin placed at the center of the CMOS sensor of the left camera.
The orientation is instead initialized from the gravity vector estimated by the internal IMU.
The reference frame reflects the coordinate system value set in `InitParameters`.
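To express a world-frame point in the coordinates of a chosen camera frame, you invert the camera-to-world pose that positional tracking reports for that frame. Here is a minimal numpy sketch of that math; the pose values are made up for illustration, and in practice the rotation and translation would come from the pose your SDK returns for the chosen frame:

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 camera-to-world transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical pose of the chosen frame: camera moved 1 m along +X,
# with no rotation (identity). In a real pipeline this pose would be
# read from the positional tracking module for that frame.
cam_to_world = pose_matrix(np.eye(3), [1.0, 0.0, 0.0])

# World -> camera is the inverse of camera -> world.
world_to_cam = np.linalg.inv(cam_to_world)

# A world point 2 m along +X ends up 1 m in front of this camera.
p_world = np.array([2.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
p_cam = world_to_cam @ p_world
print(p_cam[:3])  # -> [1. 0. 0.]
```

Once a point is in camera coordinates, projecting it into image (pixel) coordinates is a separate step that uses the camera intrinsics (focal lengths and principal point) from the camera calibration.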
@Myzhar I would much appreciate it if you could elaborate on the world frame and the camera frame. For example, I used the positional tracking module to estimate camera positions and orientations with respect to a world (reference) frame (https://github.com/stereolabs/zed-examples/blob/master/tutorials/tutorial%204%20-%20positional%20tracking/c/main.c). Before the estimation, I fixed the initial camera frame to coincide with the world frame, i.e.
initial_world_rotation = [0, 0, 0, 1] (SL_PositionalTrackingParameters Struct Reference | API Reference | Stereolabs). For frame #1, I got
orientation = [0.6893, -0.0374, 0.7213, 0.0569]. Does this mean that the camera has only been rotated to this new orientation with respect to the world frame?
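The orientation quaternion only describes rotation; whether the camera has *only* rotated depends on whether the reported translation is also zero. As a sanity check, you can convert the quaternion into a rotation matrix yourself. A sketch with numpy, assuming (x, y, z, w) component order (verify the ordering your SDK version uses):

```python
import numpy as np

def quat_to_rotation_matrix(x, y, z, w):
    """Convert a unit quaternion in (x, y, z, w) order to a 3x3 rotation matrix."""
    n = np.sqrt(x*x + y*y + z*z + w*w)   # normalize defensively
    x, y, z, w = x/n, y/n, z/n, w/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

# The orientation reported for frame #1 above.
R = quat_to_rotation_matrix(0.6893, -0.0374, 0.7213, 0.0569)

# A valid rotation matrix is orthonormal with determinant +1.
print(np.allclose(R @ R.T, np.eye(3)))     # True
print(np.isclose(np.linalg.det(R), 1.0))   # True
```

This rotation matrix, combined with the frame's translation vector, gives the 4x4 camera-to-world pose; its inverse maps world coordinates into that frame's camera coordinates.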