About converting depth map to point cloud

Hi
I’m trying to convert a manually calculated depth map into a point cloud. I know the ZED SDK and the Python API provide the point cloud without the need to compute it, by calling the retrieveMeasure function with the MEASURE.XYZBGRA parameter, and I use that to obtain the point cloud. However, when I use "recording/export" from the ZED SDK examples to obtain the left and depth images, and then calculate the point cloud myself (a sketch of the computation follows), the deviation in the x and y directions is very large, so the resulting point cloud is much larger than the real size.
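The computation follows the standard pinhole back-projection from the support page (a minimal sketch of that formula, not my exact code; fx, fy, cx, cy are the left-camera intrinsics at the depth map's resolution):

// Back-project every pixel (u, v) with depth Z into camera coordinates.
// fx, fy, cx, cy must belong to the (rectified) left camera at the SAME
// resolution as the depth map.
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> depthToPointCloud(const float* depth, int width, int height,
                                      float fx, float fy, float cx, float cy) {
    std::vector<Point3> cloud;
    cloud.reserve(static_cast<size_t>(width) * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float Z = depth[v * width + u];  // depth in the unit set when opening the camera (e.g. meters)
            float X = (u - cx) * Z / fx;
            float Y = (v - cy) * Z / fy;
            cloud.push_back({X, Y, Z});
        }
    }
    return cloud;
}

In practice invalid depth values (NaN/±inf on occluded or textureless pixels) would also be skipped.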


I tried directly outputting the point cloud view in the depth sensing example. The original image size is 1280x720, but due to the following code the image size becomes 720x405. The point cloud obtained this way is closer to the real size.
// create a custom resolution to save ZED data
auto camera_config = zed.getCameraInformation().camera_configuration;
float image_aspect_ratio = camera_config.resolution.width / (1.f * camera_config.resolution.height);
int requested_low_res_w = std::min(720, (int)camera_config.resolution.width);
// e.g. a 1280x720 input gives 720 x (720 / 1.777...) = 720x405
sl::Resolution res(requested_low_res_w, requested_low_res_w / image_aspect_ratio);
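In that sample, the point cloud is then retrieved at this reduced resolution; a sketch of that call (the SDK rescales the measure to the requested size):

sl::Mat point_cloud;
// XYZ values here are metric 3D coordinates for the 720x405 image.
// If you instead back-project a downscaled depth map yourself, remember that
// fx, fy, cx, cy scale by the same resize factor (here 720/1280).
zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZRGBA, sl::MEM::CPU, res);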

My question is: why does this happen? The depth values obtained at the two resolutions are the same, but there are large deviations in x and y. I really need to compute it manually. Thanks!

Hi @lyehan
did you use the information from this support page?

Yes, I did. In fact, the code I use here is written based on the formula in that link.

Hi @lyehan,

Can you please share how you are retrieving the calibration values from the camera?
Please make sure you are using the rectified parameters for the left camera in order to compute the point cloud correctly.
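For example, the rectified left-camera parameters can be read like this (the fx/fy/cx/cy fields match the rectified left image at the current resolution):

sl::CameraParameters left_rect = zed.getCameraInformation().camera_configuration.calibration_parameters.left_cam;
// use left_rect.fx, left_rect.fy, left_rect.cx, left_rect.cy in the back-projection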

In the air I used the camera's out-of-the-box calibration parameters; in the water I used the OpenCV recalibration parameters, and the focal ratio is also close to 1.33. Specifically, I first use InitParameters to import the calibration file for water, then use zed.retrieveImage(left_image, VIEW::LEFT) and zed.retrieveMeasure(depth_image, MEASURE::DEPTH) to get the left image and the depth image, and then use the following formula to calculate the point cloud:
[formula: X = (u − cx) · Z / fx, Y = (v − cy) · Z / fy, with Z the depth at pixel (u, v)]
I would like to know whether this gives me a rectified left image (epipolar-aligned, undistorted); I currently treat it as if it had already been rectified. Thanks!
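For context, my pipeline looks roughly like this (a sketch; using optional_opencv_calibration_file to load the OpenCV file is my assumption about the right InitParameters field):

#include <sl/Camera.hpp>

sl::Camera zed;
sl::InitParameters init_params;
init_params.camera_resolution = sl::RESOLUTION::HD720;
// Assumption: passing the underwater OpenCV calibration via this optional field
init_params.optional_opencv_calibration_file = "zed_calibration_use.yml";

if (zed.open(init_params) == sl::ERROR_CODE::SUCCESS && zed.grab() == sl::ERROR_CODE::SUCCESS) {
    sl::Mat left_image, depth_image;
    zed.retrieveImage(left_image, sl::VIEW::LEFT);         // rectified left view
    zed.retrieveMeasure(depth_image, sl::MEASURE::DEPTH);  // depth aligned to the left view
    // ... then back-project with the formula above using the rectified intrinsics
}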

Hi @lyehan,

After ingesting the OpenCV calibration parameters into the SDK, are you retrieving the rectified parameters using the API? (CameraConfiguration Struct Reference | API Reference | Stereolabs)

These are the calibration_parameters in the CameraConfiguration object.

Do you mean that I may not have successfully imported the file calibrated in water? But the left- and right-eye images I get look undistorted and epipolar-aligned. And here's how I imported it:

I tried looking at the left camera's intrinsic parameters, but I found that whether I set the camera_disable_self_calib InitParameter to false or true, the result is as shown in the image below.

[image: left-camera parameters reported by the SDK]

But in fact, the parameters in the zed_calibration_use.yml file, which I recalibrated in water, are shown in the figure below, and I wonder why. Thanks!

The initial calibration parameters apply to the unrectified images, while the parameters for the rectified images are adjusted accordingly.
They correspond, respectively, to:

// Parameters applicable to unrectified images (factory camera parameters or manually calibrated camera parameters)
sl::CameraParameters left_cam_params_raw = zed.getCameraInformation().camera_configuration.calibration_parameters_raw.left_cam;
// Parameters applicable to rectified images
sl::CameraParameters left_cam_params = zed.getCameraInformation().camera_configuration.calibration_parameters.left_cam;
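To see which set matches an external calibration file, printing both is a quick check (sketch; requires <iostream>):

// The raw parameters should match the values in the OpenCV .yml file,
// while the rectified ones will generally differ.
std::cout << "raw fx: " << left_cam_params_raw.fx
          << ", rectified fx: " << left_cam_params.fx << std::endl;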

Hi @lyehan,

Yes, we distinguish both sets of parameters as you've found. Glad to hear this solved your issue.