Stereo PnP (Perspective-n-Point)

Hi, I’m wondering if anyone has used the stereo aspect of ZED cameras to improve PnP (Perspective-n-Point) pose estimation of known objects in view. See OpenCV reference here: OpenCV: Perspective-n-Point (PnP) pose computation

Basically, with a single (mono) camera, you can find the 3D pose of an object by detecting the 2D pixel locations of known points on the object (usually the 4 corners of a printed square marker of known size), then using the camera’s intrinsic calibration matrix and the corresponding 3D object points to estimate the object’s pose, i.e. the 3D pose whose 2D projection matches the detections with minimal reprojection error. See cv::solvePnP (OpenCV: Camera Calibration and 3D Reconstruction)
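For reference, here is a minimal mono PnP sketch in Python. The marker size, intrinsics, and pixel detections are placeholders, not values from any particular camera:

```python
import numpy as np
import cv2

# 3D corners of a 50 mm square marker in the marker's own frame (assumed size),
# in the corner order expected by SOLVEPNP_IPPE_SQUARE.
half = 0.025  # meters
object_points = np.array([
    [-half,  half, 0.0],
    [ half,  half, 0.0],
    [ half, -half, 0.0],
    [-half, -half, 0.0],
], dtype=np.float64)

# 2D pixel detections of those corners (placeholder values; in practice these
# come from a marker detector such as cv2.aruco).
image_points = np.array([
    [612.3, 340.1],
    [688.9, 342.7],
    [690.2, 418.5],
    [610.8, 416.0],
], dtype=np.float64)

# Intrinsics of the (left) camera -- use your camera's actual calibration.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume rectified/undistorted images

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation of the marker in the camera frame
    print("t (m):", tvec.ravel())
```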

However, I need extremely precise (< 1 mm error) pose estimates at close range (1 to 2.5 feet from the camera), and am wondering if the ZED SDK has an API or tool for merging the PnP results from the left and right cameras to get more precise pose estimates, or if anyone else knows of other methods of fusing PnP results from multiple/stereo cameras for better accuracy/precision?
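One approach I’ve been considering (just a sketch, not a ZED SDK feature) is to treat the stereo pair as a joint optimization: seed with the left-camera solvePnP result, then refine the pose by minimizing reprojection error in both images using the known baseline from the stereo calibration. Roughly, assuming rectified images with identical left/right intrinsics and corner detections in both views (all names below are placeholders):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def stereo_pnp_refine(object_points, left_px, right_px, K, baseline_m, rvec0, tvec0):
    """Refine (rvec, tvec) of the object in the LEFT camera frame using both views."""
    dist = np.zeros(5)  # rectified/undistorted images assumed

    def residuals(params):
        rvec = params[:3].reshape(3, 1)
        tvec = params[3:].reshape(3, 1)
        # Reprojection error in the left image.
        proj_l, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
        # Right camera: same orientation, shifted by the baseline along +x.
        tvec_r = tvec - np.array([[baseline_m], [0.0], [0.0]])
        proj_r, _ = cv2.projectPoints(object_points, rvec, tvec_r, K, dist)
        res_l = (proj_l.reshape(-1, 2) - left_px).ravel()
        res_r = (proj_r.reshape(-1, 2) - right_px).ravel()
        return np.concatenate([res_l, res_r])

    x0 = np.concatenate([rvec0.ravel(), tvec0.ravel()])
    sol = least_squares(residuals, x0, method="lm")
    return sol.x[:3], sol.x[3:]
```

The idea is that the baseline constrains depth much more tightly than a single view can, so the translation along the optical axis (the weakest direction in mono PnP) should improve. The intrinsics and baseline would come from the stereo calibration reported by the camera/SDK. Happy to hear if there’s a better or built-in way to do this.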

Thanks
