Multi-device calibration

Hello. I’m trying to join two point clouds from two ZED2i devices, so I need the translation and rotation of the second ZED camera relative to the first one. I’ve tried stereo calibration with OpenCV, but I got poor results. I probably need to provide proper camera matrices and distortion coefficients for both cameras. How can I get them?

I run the calibration like this:

    double error = cv::stereoCalibrate(
            chessboardCornersWorldNestedForCV, slaveChessboardCornersList,
            masterChessboardCornersList,
            slaveCameraMatrix, slaveDistCoeff, masterCameraMatrix, masterDistCoeff, imageSize,
            R,  // output
            t,  // output
            cv::noArray(), cv::noArray(),
            // Note: CALIB_CB_FAST_CHECK is a findChessboardCorners flag, not a
            // stereoCalibrate flag; passing it here silently sets an unrelated bit.
            cv::CALIB_FIX_INTRINSIC | cv::CALIB_RATIONAL_MODEL);

slaveCameraMatrix and masterCameraMatrix are just identity matrices.
slaveDistCoeff and masterDistCoeff are vectors of floats, filled with zeros.
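Since CALIB_FIX_INTRINSIC tells stereoCalibrate to treat those matrices as ground truth, identity intrinsics can’t produce good extrinsics. The matrix OpenCV expects is the standard 3×3 pinhole camera matrix; here is a minimal sketch of its layout (a hypothetical helper, and the focal-length / principal-point numbers below are made up, not real ZED2i values):

```cpp
#include <cassert>

// Sketch only: fills the standard 3x3 pinhole camera matrix
//   [ fx  0  cx ]
//   [  0 fy  cy ]
//   [  0  0   1 ]
// fx, fy are focal lengths in pixels; cx, cy the principal point.
void buildCameraMatrix(double fx, double fy, double cx, double cy,
                       double K[3][3]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            K[r][c] = 0.0;   // off-diagonal entries stay zero
    K[0][0] = fx;
    K[1][1] = fy;
    K[0][2] = cx;
    K[1][2] = cy;
    K[2][2] = 1.0;
}
```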

Here is the result of joining the two point clouds representing a chair:


I don’t know the correct answer, but have you looked at:
https://www.stereolabs.com/docs/api/classsl_1_1Pose.html
https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Pose.html

Hello, have you tried checking the calibration with ArUco markers? The tutorial doesn’t work as-is, but it’s a lead.

Can you tell us how you displayed both cameras’ point clouds in the same viewer?

have you tried checking the calibration with ArUco markers?

OK, I’ll check it (for now I have a strange problem with detecting the ZED camera from inside a Docker container).

Can you tell us how you displayed both cameras’ point clouds in the same viewer?

The image with the two point clouds was created by merging the points from both cameras into one 3D object. The points from the second camera were transformed using the R and t returned by cv::stereoCalibrate.
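Concretely, each point from the second (slave) camera is mapped into the first (master) camera’s frame with p_master = R · p_slave + t. A minimal plain-C++ sketch of that transform (hypothetical helper name; the real code works on the cv::Mat results directly):

```cpp
#include <array>
#include <cassert>

// Apply the stereoCalibrate extrinsics to one 3D point:
//   p_master = R * p_slave + t
std::array<double, 3> toMasterFrame(const double R[3][3],
                                    const std::array<double, 3>& t,
                                    const std::array<double, 3>& p) {
    std::array<double, 3> out{};
    for (int r = 0; r < 3; ++r) {
        out[r] = t[r];                    // translation component
        for (int c = 0; c < 3; ++c)
            out[r] += R[r][c] * p[c];     // rotation component
    }
    return out;
}
```

Running this over every point of the second cloud, then concatenating both clouds, gives the merged 3D object.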

I’ve taken the camera matrix as in the example you provided:

auto camCalib = zeds[i].zed.getCameraInformation().camera_configuration.calibration_parameters.left_cam;
zeds[i].camera_matrix(0, 0) = camCalib.fx;
zeds[i].camera_matrix(1, 1) = camCalib.fy;
zeds[i].camera_matrix(0, 2) = camCalib.cx;
zeds[i].camera_matrix(1, 2) = camCalib.cy;
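If I recall correctly, the same left_cam struct also exposes a disto array with the distortion coefficients (OpenCV’s k1, k2, p1, p2, k3 ordering — worth double-checking against the SDK docs), so the zero-filled distCoeff vectors can be replaced as well. A sketch, assuming that array:

```cpp
#include <cassert>
#include <vector>

// Sketch: copy the SDK's distortion array (assumed k1, k2, p1, p2, k3)
// into the vector handed to cv::stereoCalibrate, instead of zeros.
std::vector<double> distCoeffFromZed(const double disto[5]) {
    return std::vector<double>(disto, disto + 5);
}
```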

Results are much better!

Hi,

Compared to the source code, you added “camera_configuration”. In my case I still get the same matrix, and still bad results.

acuroData.marker_length = 0.15f; // real size of the marker in meters; this is the default, but I changed it to 0.16 to match my marker size.

Can you share a repo so we can discuss it together?

Happy new year

I’m using chessboard calibration instead of ArUco. Sorry, I can’t share my code because it’s for a commercial product, but I followed the examples provided for the Microsoft Kinect: Azure-Kinect-Sensor-SDK/examples/green_screen at develop · microsoft/Azure-Kinect-Sensor-SDK · GitHub and this function: Azure-Kinect-Sensor-SDK/main.cpp at develop · microsoft/Azure-Kinect-Sensor-SDK · GitHub

Happy new year!