Why is the point cloud data I got like this

Why does the point cloud data I got look like this, and why do the color values always overflow? How can I solve it? Below is my Python code:

import pyzed.sl as sl
import pptk
import cv2
import open3d as o3d
import numpy as np

camera = sl.Camera()

init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_maximum_distance = 5.0
init_params.depth_minimum_distance = 0.2
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.camera_fps = 30

# Open the camera
err = camera.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    camera.close()
    exit(1)

image_size = camera.get_camera_information().camera_resolution
image_size.width = image_size.width // 2
image_size.height = image_size.height // 2

runtime_parameters = sl.RuntimeParameters()
runtime_parameters.sensing_mode = sl.SENSING_MODE.STANDARD
runtime_parameters.confidence_threshold = 50
runtime_parameters.texture_confidence_threshold = 100
runtime_parameters.sensing_mode = sl.SENSING_MODE.FILL

mat = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.F32_C4)

if camera.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    camera.retrieve_measure(mat, sl.MEASURE.XYZABGR, sl.MEM.CPU, image_size)
    point_cloud = mat.get_data()
    point_cloud = np.array(point_cloud, dtype=np.float32)
    v = pptk.viewer(point_cloud)

camera.close()
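From what I understand of the documentation, the fourth channel of the XYZABGR measure packs the four 8-bit color components into a single 32-bit float, so reading that channel as a plain float is probably why the values look overflowed. Below is a rough sketch of how I think it could be unpacked with NumPy (just my assumption, not verified):

# point_cloud has shape (H, W, 4): X, Y, Z plus a packed color float.
xyz = point_cloud[:, :, :3]

# Assumption: the fourth float is really 4 packed bytes (one per color channel),
# so reinterpreting its raw bits should recover 0-255 values.
packed = np.ascontiguousarray(point_cloud[:, :, 3])
bits = packed.view(np.uint32)
c0 = (bits      ) & 0xFF   # exact channel order depends on the measure used (ABGR here?)
c1 = (bits >>  8) & 0xFF
c2 = (bits >> 16) & 0xFF
colors = np.stack([c0, c1, c2], axis=-1).astype(np.float32) / 255.0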

The point cloud looks like this:


I also have a question: is there any difference between using Python and C++ to operate ZED cameras, and is the Python API limited in any way?

Help me, please! :sob:

@zore017 Hi, please don’t double-post.

  • What version of the SDK are you using? I think SENSING_MODE has been removed for some time now.
  • What are you filming? Flat uniform surfaces are bad for depth detection.
  • Please format your code correctly, it’s difficult to read as it is.

The core SDK is in C++ and runs at the same speed whether you use the Python wrapper or not. With our Python implementation, you can do everything you can do in C++. Just note that multithreading on the Python side is basically virtual; the core SDK makes full use of it underneath.

Edit: Are you able to run the samples correctly? → https://github.com/stereolabs/zed-sdk/tree/master/depth%20sensing/depth%20sensing

Thank you for your reply!
The SDK version I use is 3.8.2
And you are right, I was shooting a flat surface; I thought it would be easier to detect that way, but I was wrong.
Below is my commented code:

# Import necessary libraries
import pyzed.sl as sl  # ZED SDK for python
import pptk  # For visualizing point clouds
import cv2  # For image processing
import open3d as o3d  # For 3D operations
import numpy as np  # For numerical operations

# Initialize the ZED Camera
camera = sl.Camera()

# Set camera parameters
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # Set the resolution of the camera to HD720
init_params.coordinate_units = sl.UNIT.METER  # Set the unit of measurement to meters
init_params.depth_maximum_distance = 5.0  # Set the maximum distance for depth perception to 5.0 meters
init_params.depth_minimum_distance = 0.2  # Set the minimum distance for depth perception to 0.2 meters
init_params.depth_mode = sl.DEPTH_MODE.NEURAL  # Set the mode of depth perception to 'NEURAL'
init_params.camera_fps = 30  # Set the frames per second of the camera to 30

# Open the camera
# If there's an error in opening the camera, close the camera and exit
err = camera.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    camera.close()
    exit(1)

# Retrieve the camera resolution and halve the width and height
image_size = camera.get_camera_information().camera_resolution
image_size.width = image_size.width // 2  # Halve the width (integer division, sl.Mat expects integers)
image_size.height = image_size.height // 2  # Halve the height

# Set runtime parameters for the camera
runtime_parameters = sl.RuntimeParameters()
runtime_parameters.sensing_mode = sl.SENSING_MODE.STANDARD  # Set the sensing mode to 'STANDARD'
runtime_parameters.confidence_threshold = 50  # Set the confidence threshold to 50
runtime_parameters.texture_confidence_threshold = 100  # Set the texture confidence threshold to 100
runtime_parameters.sensing_mode = sl.SENSING_MODE.FILL  # Set the sensing mode to 'FILL' (overwrites the previous 'STANDARD' mode)

# Initialize a matrix to store the camera data
mat = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.F32_C4)

# If the camera is able to grab an image
if camera.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    # Retrieve the point cloud measure (XYZ + packed color) and store it in 'mat'
    camera.retrieve_measure(mat, sl.MEASURE.XYZABGR, sl.MEM.CPU, image_size)

    # Get the data from 'mat' and store it in 'point_cloud'
    point_cloud = mat.get_data()

    # Convert the data in 'point_cloud' to a numpy array of type float32
    point_cloud = np.array(point_cloud, dtype=np.float32)

    # Visualize the point cloud with pptk viewer
    v = pptk.viewer(point_cloud)

# Close the camera
camera.close()

Below is a rendering from running the depth_sensing sample; the scale feels very small to me.

Hi,
Can you send the original image?
Also, did you try to launch the integrated samples on your side to understand the issue? They are available at https://github.com/stereolabs/zed-sdk (match your SDK version if needed) or directly in your SDK installation. The depth sensing sample might help you figure out what is happening.

Thank you for your reply!
Below is the original image I took, along with depth_sensing renderings, probably from different angles:


Thank you for your reply again!

I’m a bit confused; what do you expect from the rendering?
It seems to me that the whole image is rendered in the point cloud.

If the scale is small, maybe there is a difference between your renderer’s unit and the one you use in your code (METER it seems). Like, maybe your rendering is in centimeters or millimeters, so the rendering is a 1:100 or 1:10 scale.
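As a quick sanity check, something like the sketch below (based on the code you posted; point_cloud is the array returned by mat.get_data()) would drop the invalid points, print the coordinate range, and pass only XYZ to pptk, so you can see whether the values really are in meters:

import numpy as np
import pptk

# Keep only XYZ and flatten the (H, W, 4) array into an N x 3 list of points.
points = point_cloud[:, :, :3].reshape(-1, 3)

# The SDK leaves NaN/Inf where depth could not be computed; filter them out.
points = points[np.isfinite(points).all(axis=1)]

print("X range:", points[:, 0].min(), points[:, 0].max())
print("Y range:", points[:, 1].min(), points[:, 1].max())
print("Z range:", points[:, 2].min(), points[:, 2].max())

v = pptk.viewer(points)
v.set(point_size=0.005)  # point size is in the same unit as the cloud (meters here)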

Thank you for your reply. My ultimate goal is to get elevation data of the water surface. May I ask whether your company has done any relevant experiments, and how effective they were?

We would need more details on what you want to achieve exactly. Do you get bad results when trying on your side?

My goal is to photograph the surface of the water and get elevation data for the waves on the surface; it does not need to be very dense. I haven’t experimented with the water surface yet, but one thing is clear: the camera has a small shooting range, and reflections off the water surface have a big impact.

And I wondered whether ROS or YOLO, or some other tool, would help me reach my goal.

I don’t recall customers doing precisely this, so the best way to know the possibilities would be to test the setup.
Indeed, the reflections on the water’s surface will probably have a noticeable impact.

The accuracy of the depth itself will go down the further the camera is from the target; the depth accuracy you can expect is on the datasheet of your camera (<1% up to 3 m and <5% up to 15 m for the ZED 2i, for instance).
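As a rough illustration of what those percentages mean in absolute terms (illustrative numbers only, always check the datasheet of your own model):

# Back-of-the-envelope depth error, using the ZED 2i figures quoted above
# (<1% up to 3 m, <5% up to 15 m) as an example.
for distance_m in (1.0, 3.0, 5.0, 10.0, 15.0):
    error_pct = 0.01 if distance_m <= 3.0 else 0.05
    print(f"{distance_m:4.1f} m -> up to ~{distance_m * error_pct * 100:.0f} cm of depth error")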

If I’m not mistaken, ROS is a robotics framework and YOLO specializes in object detection, so they won’t help here. You can try different angles, resolutions, and depth modes on the camera (ULTRA vs. NEURAL mostly). Maybe the exposure settings could be adjusted too.
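For example, switching the depth mode and setting the exposure manually could look roughly like this (a sketch against the 3.x Python API; the values are placeholders to experiment with):

import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA   # compare ULTRA vs. NEURAL
init_params.coordinate_units = sl.UNIT.METER

camera = sl.Camera()
if camera.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Manual exposure as a percentage; -1 normally gives control back to auto-exposure.
camera.set_camera_settings(sl.VIDEO_SETTINGS.EXPOSURE, 50)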

I’ll also mention that recording some SVOs at different framerates and resolutions should allow you to test the same way as with real-time tests. You don’t have to modify the parameters in the field to see the results; I would just make sure to record a broad set of setups.
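A minimal recording loop could look roughly like this (a sketch for SDK 3.x; the file name and frame count are placeholders):

import pyzed.sl as sl

camera = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720   # vary resolution/fps per test
init_params.camera_fps = 30

if camera.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

recording_params = sl.RecordingParameters("test_hd720_30fps.svo",
                                           sl.SVO_COMPRESSION_MODE.H264)
if camera.enable_recording(recording_params) != sl.ERROR_CODE.SUCCESS:
    camera.close()
    exit(1)

runtime = sl.RuntimeParameters()
for _ in range(300):          # roughly 10 seconds at 30 fps
    camera.grab(runtime)      # every successfully grabbed frame is appended to the SVO

camera.disable_recording()
camera.close()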