I’m trying to display the object detection box over both the left and right images in real time. However, the object data’s bounding box coordinates are only for the left ZED camera. How can I get the coordinates for the right side? Is it a matter of simply adding a static integer offset to the left box’s coordinates to get the right box’s?
If you need to express 3D coordinates in the right camera reference frame, you can convert the ones given in the left camera frame, since the transformation between the left and right cameras is known. You can get this transformation from the SDK by calling
zed.get_camera_information().calibration_parameters.stereo_transform ( CalibrationParameters Struct Reference | API Reference | Stereolabs )
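A minimal sketch of the frame change with NumPy. The 4×4 matrix here is a hypothetical stand-in: in practice you would build it from the `stereo_transform` returned by the SDK call above, and the sign and magnitude of the baseline shown below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical left-to-right stereo transform. In practice, fill this 4x4
# homogeneous matrix from
# zed.get_camera_information().calibration_parameters.stereo_transform.
# Here we assume a pure horizontal translation by the baseline (~0.12 m).
T_left_to_right = np.eye(4)
T_left_to_right[0, 3] = -0.12  # assumed: x shifts by minus the baseline

def to_right_frame(point_left, transform):
    """Express a 3D point given in the left camera frame in the right camera frame."""
    p = np.append(point_left, 1.0)   # homogeneous coordinates (x, y, z, 1)
    return (transform @ p)[:3]       # back to Cartesian (x, y, z)

p_right = to_right_frame(np.array([0.5, 0.1, 2.0]), T_left_to_right)
# Only the x component changes under a pure horizontal translation.
```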
Then, knowing the 3D coordinates (x, y, z) in the right camera frame, you can also get the 2D pixel coordinates (u, v) in the right image from the pinhole model:

x = (u - cx) * z / fx
y = (v - cy) * z / fy

which, solved for the pixel coordinates, gives

u = x * fx / z + cx
v = y * fy / z + cy
where cx, cy, fx, fy are the intrinsic parameters of the right camera, which you can access through
zed.get_camera_information().calibration_parameters.right_cam (doc : CalibrationParameters Struct Reference | API Reference | Stereolabs)
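The projection step can be sketched as follows. The intrinsic values below are made-up placeholders: in practice, read fx, fy, cx, cy from the `right_cam` parameters referenced above.

```python
import numpy as np

# Hypothetical right-camera intrinsics; in practice read them from
# zed.get_camera_information().calibration_parameters.right_cam.
fx, fy = 700.0, 700.0
cx, cy = 640.0, 360.0

def project(point, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (camera frame) to 2D pixel coordinates."""
    x, y, z = point
    u = x * fx / z + cx  # inverse of x = (u - cx) * z / fx
    v = y * fy / z + cy  # inverse of y = (v - cy) * z / fy
    return u, v

# Project a 3D point already expressed in the right camera frame.
u, v = project(np.array([0.38, 0.1, 2.0]), fx, fy, cx, cy)
# u = 0.38 * 700 / 2 + 640 = 773.0 ; v = 0.1 * 700 / 2 + 360 = 395.0
```

Applying this per corner of the transformed 3D bounding box gives the right-image 2D box. Note this is not a constant pixel shift: the horizontal offset between the left and right boxes (the disparity) depends on depth, so simply adding a static integer only works if all objects sit at roughly the same distance.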
Hope this helps