I am wondering if anyone can give me some insight into an issue I have encountered recently with the ZED 2i camera.
So, I have been experimenting with sensor fusion, projecting LiDAR data onto images obtained from the ZED camera. I found that it works fairly well when the two sensors point forward. However, when I rotated and flipped the camera, the fusion broke even after accounting for the additional rotation (roughly as in the sketch below).
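For reference, this is roughly what my projection pipeline looks like. The names K, T_lidar_to_cam, and R_extra are placeholders standing in for my actual calibration values, not the real numbers:

```python
import numpy as np

# Hypothetical calibration values: K is the left camera intrinsics,
# T_lidar_to_cam the 4x4 LiDAR-to-camera extrinsic transform, and
# R_extra the additional rotation I apply for the rotated/flipped mount.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
T_lidar_to_cam = np.eye(4)   # placeholder extrinsics
R_extra = np.eye(3)          # placeholder mount rotation

def project(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points into pixel coordinates."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T)[:3]   # into camera frame
    pts_cam = R_extra @ pts_cam                # account for rotated mount
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T                  # perspective divide
```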
My question is: how does the ZED camera initialize its coordinate system? Right now, I am led to believe that regardless of how I position the camera (horizontally or vertically, in my case), after I run zed_wrapper to record some data, the IMAGE coordinate system is initialized in the same way in both cases (Coordinate Frames - Stereolabs). In other words, I believe that at startup the two coordinate systems are set identically, not such that one is a rotated version of the other. Does the ZED camera have some "default position", such as lying horizontally, so that I know where "down", "right", and "forward" are from the camera's point of view?
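To illustrate what I mean by "initialization": I am currently recording through zed_wrapper, but in SDK terms I would expect something like the following minimal pyzed sketch to pin down the frame explicitly. Could someone confirm whether this setting behaves the same regardless of the physical mounting?

```python
import pyzed.sl as sl

init_params = sl.InitParameters()
# Explicitly select the IMAGE coordinate frame (x right, y down, z forward)
init_params.coordinate_system = sl.COORDINATE_SYSTEM.IMAGE
init_params.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
status = zed.open(init_params)
if status != sl.ERROR_CODE.SUCCESS:
    print(f"Failed to open camera: {status}")
```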
I hope I have made myself clear; I have had some difficulty understanding what might be the cause of the problems in my project.