I am working on a solution to visualize 3D position coordinates of objects that I detected and saved in an earlier run.
Specifically, after a run/recording I save the .area map file along with the detections, and I would like to visualize those detections when I revisit the same area later with the camera.
However, the saved positions are with respect to the world coordinate frame.
How can I transform these coordinates so that I can draw the bounding boxes on my current image, i.e. convert the objects' 3D positions from the world frame to the camera frame?
I figured it out.
// cam_pose is the camera pose wrt the world frame (the camera-to-world transform)
sl::Transform cam_to_world = cam_pose.pose_data;
// invert it to get the world-to-camera transform
sl::Transform world_to_cam = cam_to_world;
world_to_cam.inverse(); // in-place inversion
// current_pos is an object's position wrt the world frame; new_pos is the same point wrt the camera frame
sl::Translation new_pos = current_pos * world_to_cam.getOrientation() + world_to_cam.getTranslation();
I have the same problem as you.
I have a question.
- How do you get cam_pose and current_pos?
I get the cam_pose with respect to the world frame using this function:
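Roughly, the call looks like this (a sketch; it assumes an opened `zed` camera object with positional tracking already enabled):

```cpp
// Sketch: assumes `zed` is an opened sl::Camera with positional tracking enabled.
sl::Pose cam_pose;
if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    // Camera pose expressed in the world frame.
    zed.getPosition(cam_pose, sl::REFERENCE_FRAME::WORLD);
}
```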
And I get current_pos, the position of an object with respect to the world frame, from the object detection module of the ZED 2 camera (per detected object in the list):

current_pos = objects.object_list[i].position;
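In context, the retrieval loop looks roughly like this (a sketch; `runtime_params` and `od_runtime_params` are assumed to be configured as in the Stereolabs samples, with object detection enabled):

```cpp
// Sketch: assumes `zed` is opened with positional tracking and object detection
// enabled, and runtime parameters configured as in the Stereolabs samples.
sl::Objects objects;
if (zed.grab(runtime_params) == sl::ERROR_CODE::SUCCESS) {
    zed.retrieveObjects(objects, od_runtime_params);
    for (const auto& obj : objects.object_list) {
        sl::float3 current_pos = obj.position; // expressed in measure3D_reference_frame
    }
}
```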
I highly recommend visiting the Stereolabs GitHub page for more information; look at the positional tracking and object detection samples.
@nyakasko I am trying to ensure that detected objects’ positions and measurements are saved with reference to a world frame. However, setting the following parameters
```
runtime_params = sl.RuntimeParameters()
runtime_params.measure3D_reference_frame = sl.REFERENCE_FRAME.WORLD
positional_tracking_params = sl.PositionalTrackingParameters()
position = objects.position
```
has still returned the position relative to the camera reference frame, not the world frame. Is there a parameter that I have missed or did I place them in the wrong place?