Point cloud data and queries regarding the same

Hi Team,

I’m trying to capture point cloud data in .pcd format and I have a few questions to clarify.

  1. The data appears to be displaced from its origin, requiring multiple zoom-outs to locate a point cloud with clear visibility. Is there any way to fix the origin properly and capture the data into a .pcd file to get accurate PointCloud data?

  2. The point cloud data is densely packed, making it challenging to discern objects clearly within the dataset. Are there any other settings/parameters available to make the ZED2 camera capture the data more accurately?

  3. Depth information in the point cloud data seems to be lacking, resulting in an absence of clear depth details. For example, the depth for an (x, y) pixel in the image can only be calculated when point_cloud_value.z is finite. Are there any other settings/parameters that can be modified to get a clearer estimate of the depth?

  4. We’ve also detected a significant amount of noise within the point cloud data, complicating the understanding of the dataset. Can we somehow reduce this noise and capture proper data?

Can anyone help me with these issues?

Hi @Yashas
Can you please provide pictures and data showing the problems you reported? It’s not easy to discuss all the points in the list without a reference.

And the reference image (slightly different from when I captured the point cloud data; it’s mostly just the objects):

The surfaces that you are acquiring have homogeneous colors with no texture.
This lack of visual features is challenging for pure stereo vision processing because there is not much information to perform stereo matching on.

The final result also depends a lot on the parameters that you are using.

Honestly, I do not understand this point. Can you please share the PCD file so we can analyze it?

What do you mean by “densely packed”? The point clouds that you shared show data exactly as they should be.

Are you extracting the point cloud from the depth map or are you using the point cloud that we provide?

This is because of the homogeneity of the color of the surfaces that you are acquiring.
Try lowering the depth confidence threshold to reject points with low confidence values.

Regarding “The data appears to be displaced from its origin, requiring multiple zoom-outs to locate a point cloud with clear visibility. Is there any way to fix the origin properly and capture the data into a .pcd file to get accurate PointCloud data?”:
Once you open the file, the plot is puzzling, i.e. it is hard to figure out from which angle and at what zoom the plot makes sense. Here is an example file attached for reference.
1695895497339.zip (2.9 MB)

Regarding “Are you extracting the point cloud from the depth map or are you using the point cloud that we provide?”:
No, I’m using the built-in function to get the point cloud data. Below is the code for reference.

// Retrieve the point cloud data from the camera.
zed.retrieveMeasure(point_cloud, MEASURE::XYZRGBA, MEM::CPU, res);

// Export the point cloud data as a .pcd file.
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr pclPointCloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
int pointCloud_width = point_cloud.getResolution().width;
int pointCloud_height = point_cloud.getResolution().height;
pclPointCloud->width = pointCloud_width;
pclPointCloud->height = pointCloud_height;
pclPointCloud->points.resize(pointCloud_width * pointCloud_height);

sl::float4 pointXYZRGBA;
int index = 0;
for (int y = 0; y < pointCloud_height; y++) {
    for (int x = 0; x < pointCloud_width; x++, index++) {
        point_cloud.getValue(x, y, &pointXYZRGBA);
        pclPointCloud->points[index].x = pointXYZRGBA.x;
        pclPointCloud->points[index].y = pointXYZRGBA.y;
        pclPointCloud->points[index].z = pointXYZRGBA.z;
        // The ZED SDK packs the 32-bit RGBA color into the float w component
        // as raw bits, so copy the bits instead of casting the numeric value.
        std::memcpy(&pclPointCloud->points[index].rgba, &pointXYZRGBA.w, sizeof(uint32_t));
    }
}

pcl::PCDWriter writer;
writer.write<pcl::PointXYZRGBA>(point_cloud_directory_name + "/" + to_string(timestamp) + ".pcd", *pclPointCloud);

Regarding “The point cloud data is densely packed, making it challenging to discern objects clearly within the dataset. Are there any other settings/parameters available to make the ZED2 camera capture the data more accurately?”:
It is hard to distinguish the actual objects in the point cloud data. The objects appear to have smooth edges, but in reality their edges are sharp. I suspected this was because of the camera position; I just had to make sure, hence the question. Is it possible to get clearly distinguishable point cloud data (at least enough to recognize that there is an object with a proper shape)? Are any parameters involved here?

This happens because you are not using the same coordinate system as your viewer.
You can set it in InitParameters when you open the camera:
https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1COORDINATE__SYSTEM.html
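As a configuration sketch (not a drop-in build), the coordinate system and units can be set on InitParameters before the camera is opened; the enum names below are from the ZED SDK C++ API, and the choice of frame depends on the viewer you use:

```cpp
// Sketch: match the camera's output frame to your point-cloud viewer.
// Many PCL-style viewers expect a right-handed frame; pick the variant
// (Y_UP vs Z_UP) that matches yours.
sl::InitParameters init_params;
init_params.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP;
init_params.coordinate_units = sl::UNIT::METER;  // keep units consistent with the viewer

sl::Camera zed;
if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) {
    // handle the open failure
}
```

With the frame and units matching the viewer’s convention, the exported .pcd should open near the origin at a sensible scale.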

Did you enable the FILL mode?

Thanks for the quick reply; I will try out the changes you suggested. If I need more information or have further doubts, I will post in this same thread.

No, it’s part of the RuntimeParameters:
https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1RuntimeParameters.html#a48bc439ee70a72e77958b2fa7b75299e
but it’s disabled by default, so it’s ok.
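To make the runtime-parameter point concrete, here is a small sketch (not a drop-in build). The `enable_fill_mode` flag is the ZED SDK 4.x name; SDK 3.x exposed the same behavior as `sensing_mode = SENSING_MODE::FILL` instead:

```cpp
// Sketch: FILL mode is a runtime setting, passed to each grab() call,
// not an InitParameters option.
sl::RuntimeParameters runtime_params;
runtime_params.enable_fill_mode = true;  // fill holes/occlusions in the depth map

// ...then use it on every frame before retrieving measures:
// if (zed.grab(runtime_params) == sl::ERROR_CODE::SUCCESS) { ... }
```

Note that FILL mode interpolates over missing depth, which trades accuracy for completeness; leaving it disabled keeps only measured points.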

What depth mode are you using? You can try NEURAL and set the confidence_threshold in RuntimeParameters to 10 (or lower) to obtain a less noisy point cloud.
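As a configuration sketch of that suggestion (field and enum names from the ZED SDK C++ API; the threshold value 10 is the figure suggested above, not a universal default):

```cpp
// Sketch: the depth mode is fixed once at open time, while the confidence
// threshold is applied per grab(). Lower threshold values reject more
// uncertain depth estimates, giving a sparser but cleaner point cloud.
sl::InitParameters init_params;
init_params.depth_mode = sl::DEPTH_MODE::NEURAL;

sl::RuntimeParameters runtime_params;
runtime_params.confidence_threshold = 10;  // keep only high-confidence depth
```

Open the camera with `init_params`, then pass `runtime_params` to each `grab()` call before retrieving the XYZRGBA measure.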