Optimizing ZED2 SDK for Stable 3D Point Clouds: Addressing NaN Values in Human Pose Captures

I am currently using the following code to obtain 3D data from pixel coordinates with the ZED2 SDK. However, the 3D data sometimes contains NaN values, especially when capturing human poses or other objects. How can I obtain more stable 3D point cloud data from the ZED2 SDK so that human poses or other objects are represented accurately, without encountering NaN values?

sl::float3 pix_pcl(int pix_x, int pix_y, sl::Mat &point_cloud)
{
    sl::float4 point_cloud_value;
    point_cloud.getValue(pix_x, pix_y, &point_cloud_value);
    if (std::isfinite(point_cloud_value.x) && std::isfinite(point_cloud_value.y) && std::isfinite(point_cloud_value.z))
    {
        sl::float3 pcl;
        pcl.x = point_cloud_value.x;
        pcl.y = point_cloud_value.y;
        pcl.z = point_cloud_value.z;
        return pcl;
    }
    // Fall back to the origin when the depth at this pixel is NaN/Inf.
    sl::float3 pcl;
    pcl.x = 0.0f;
    pcl.y = 0.0f;
    pcl.z = 0.0f;
    return pcl;
}

Hi @debanik123
welcome to the Stereolabs community.

It is expected that not all pixels contain valid depth information. NaN and Inf values correspond to points that could not be correctly matched by the stereo processing, typically because of reflections, flares/glares, occlusions, …

What depth mode are you using? NEURAL depth mode provides a denser depth map.
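For reference, selecting NEURAL depth when opening the camera looks roughly like this (a sketch assuming ZED SDK 3.6+ naming; it needs a connected camera to actually run):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    // NEURAL depth produces a denser depth map with fewer NaN/Inf holes.
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL;
    init_params.coordinate_units = sl::UNIT::METER;

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    // Grab a frame and retrieve the XYZ point cloud.
    sl::Mat point_cloud;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS)
        zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZ);

    zed.close();
}
```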

I am using NEURAL depth from the ZED2 SDK. Can you please tell me how to get accurate 3D data (sl::float3) for just the midpoint of the point cloud in world coordinates?
Like the midpoint of a human or custom object that you are providing.
example → body.position

Hi @debanik123,

If I understand correctly, you are trying to retrieve the 3D position of the detected objects or bodies. Is that correct?

To do this you do not have to retrieve the data from the point cloud, it is available directly in the API.

Please take a look at this guide to retrieve the 3D position of an Object using ObjectData::position, and similarly, here is a guide for retrieving the body position with BodyData::position.
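Retrieving body positions directly from the API can be sketched as follows (assuming ZED SDK 4.x body-tracking naming; a connected camera is required to run it):

```cpp
#include <sl/Camera.hpp>
#include <iostream>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL;
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    // Body tracking requires positional tracking to be enabled first.
    zed.enablePositionalTracking();
    sl::BodyTrackingParameters body_params;
    zed.enableBodyTracking(body_params);

    sl::Bodies bodies;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveBodies(bodies);
        for (const auto &body : bodies.body_list) {
            // body.position is the detected body's 3D position (sl::float3);
            // no manual point-cloud lookup is needed.
            std::cout << body.position.x << " "
                      << body.position.y << " "
                      << body.position.z << "\n";
        }
    }
    zed.close();
}
```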