What version of the SDK are you using? SENSING_MODE has been removed for some time now (see the snippet at the end of this reply for the newer equivalent).
What are you filming? Flat uniform surfaces are bad for depth detection.
Please format your code correctly, it’s difficult to read as it is.
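For reference, if you move to SDK 4.x, a minimal sketch of the replacement (assuming the 4.x API, where fill mode is a boolean on RuntimeParameters rather than a SENSING_MODE enum):
# Sketch for ZED SDK 4.x (assumption: SENSING_MODE was replaced by a boolean fill-mode flag)
import pyzed.sl as sl
runtime_parameters = sl.RuntimeParameters()
runtime_parameters.enable_fill_mode = True   # roughly equivalent to the old SENSING_MODE.FILL
# Leaving it at the default (False) corresponds to the old SENSING_MODE.STANDARD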
The core SDK is in C++ and runs at the same speed whether you use the Python wrapper or not. With our Python implementation you can do everything you can do in C++. Just note that multithreading is essentially virtual on the Python side; underneath, the core SDK makes full use of it.
Thank you for your reply!
The SDK version I use is 3.8.2.
And you are right: I was shooting a flat surface. I thought it would be easier to detect that way, but I was wrong.
Below is my commented code:
# Import necessary libraries
import pyzed.sl as sl  # ZED SDK for Python
import pptk            # For visualizing point clouds
import cv2             # For image processing
import open3d as o3d   # For 3D operations
import numpy as np     # For numerical operations

# Initialize the ZED camera
camera = sl.Camera()

# Set camera parameters
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # Set the camera resolution to HD720
init_params.coordinate_units = sl.UNIT.METER         # Set the unit of measurement to meters
init_params.depth_maximum_distance = 5.0             # Maximum distance for depth perception: 5.0 m
init_params.depth_minimum_distance = 0.2             # Minimum distance for depth perception: 0.2 m
init_params.depth_mode = sl.DEPTH_MODE.NEURAL        # Use the NEURAL depth mode
init_params.camera_fps = 30                          # Set the camera frame rate to 30 fps

# Open the camera; if it fails, close the camera and exit
err = camera.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    camera.close()
    exit(1)

# Retrieve the camera resolution and halve the width and height
image_size = camera.get_camera_information().camera_resolution
image_size.width = image_size.width // 2    # Halve the width (integer division)
image_size.height = image_size.height // 2  # Halve the height (integer division)

# Set runtime parameters for the camera
runtime_parameters = sl.RuntimeParameters()
runtime_parameters.sensing_mode = sl.SENSING_MODE.STANDARD  # Set the sensing mode to STANDARD
runtime_parameters.confidence_threshold = 50                # Set the confidence threshold to 50
runtime_parameters.texture_confidence_threshold = 100       # Set the texture confidence threshold to 100
runtime_parameters.sensing_mode = sl.SENSING_MODE.FILL      # Set the sensing mode to FILL (overwrites the previous STANDARD mode)

# Initialize a matrix to store the camera data
mat = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.F32_C4)

# If the camera is able to grab an image
if camera.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    # Retrieve the point cloud (XYZ + color) and store it in 'mat'
    camera.retrieve_measure(mat, sl.MEASURE.XYZABGR, sl.MEM.CPU, image_size)
    # Get the data from 'mat' as a (height, width, 4) float32 array
    point_cloud = np.array(mat.get_data(), dtype=np.float32)
    # Keep only the XYZ channels, flatten to an (N, 3) array, and drop invalid points
    xyz = point_cloud[:, :, :3].reshape(-1, 3)
    xyz = xyz[np.isfinite(xyz).all(axis=1)]
    # Visualize the point cloud with the pptk viewer
    v = pptk.viewer(xyz)

# Close the camera
camera.close()
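Since open3d is already imported, here is a rough sketch (an untested alternative on my side, assuming the same (N, 3) 'xyz' array as above) of viewing the same point cloud with Open3D instead of pptk:
# Sketch: visualize the XYZ points with Open3D instead of pptk
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz.astype(np.float64))
o3d.visualization.draw_geometries([pcd])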
Below is a rendering from the running depth_sensing, and the result looks very small to me.
I’m a bit confused: what do you expect from the rendering?
It seems to me that the whole image is rendered in the point cloud.
If the scale is small, maybe there is a difference between your renderer’s unit and the one you use in your code (METER, it seems). For example, if your renderer works in centimeters or millimeters, the rendering would appear at a 1:100 or 1:10 scale. You can quickly sanity-check this by printing the coordinate ranges, as in the sketch below.
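A minimal sketch, assuming the (N, 3) 'xyz' array from your code above; with UNIT.METER, a scene a few meters deep should span only a few units, while values in the hundreds or thousands would point to a centimeter/millimeter mismatch in the renderer:
# Sketch: print coordinate ranges to check the units of the point cloud
print("X range:", np.nanmin(xyz[:, 0]), "to", np.nanmax(xyz[:, 0]))
print("Y range:", np.nanmin(xyz[:, 1]), "to", np.nanmax(xyz[:, 1]))
print("Z range:", np.nanmin(xyz[:, 2]), "to", np.nanmax(xyz[:, 2]))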
Thank you for your reply. My ultimate goal is to get elevation data for the water surface. May I ask whether your company has done related experiments, and how well they worked?
My goal was to photograph the water surface and get elevation data for the waves on it; the data does not need to be very dense. I haven’t experimented on the water surface yet, but one thing is clear: the camera has a small shooting range, and reflections on the water surface have a big impact.
I don’t recall customers doing precisely this, so the best way to know the possibilities would be to test the setup.
Indeed, the reflections on the water’s surface will probably have a noticeable impact.
The accuracy of the depth itself goes down the further the camera is from the target; the depth accuracy you can expect is listed on your camera’s datasheet (< 1% up to 3 m and < 5% up to 15 m for the ZED 2i, for instance). As a rough example, that means an error under about 3 cm at 3 m, but potentially several tens of centimeters at 10 to 15 m, which matters if the wave heights you want to measure are only a few centimeters.
If I’m not mistaken, ROS is a robotics framework and YOLO specializes in object detection, so they won’t help here. You can try different angles, resolutions, and depth modes on the camera (ULTRA vs. NEURAL, mostly). Maybe the exposure settings could be adjusted too; see the sketch just below.
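A minimal sketch of trying a different depth mode and a manual exposure, assuming the 3.x Python API and your existing init_params/camera objects; the exposure value is only illustrative:
# Sketch: compare depth modes and adjust exposure (illustrative value)
init_params.depth_mode = sl.DEPTH_MODE.ULTRA  # try ULTRA instead of NEURAL and compare results
# After camera.open(init_params):
camera.set_camera_settings(sl.VIDEO_SETTINGS.EXPOSURE, 30)  # manual exposure (0-100); -1 restores auto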
I’ll also mention that recording some SVOs at different frame rates and resolutions should allow you to test the same way as with real-time tests. You don’t have to modify the parameters in the field to see the results; I would just make sure to record a broad set of setups. A recording sketch follows.
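A minimal recording sketch, assuming the 3.x Python API and your existing camera/runtime_parameters objects; the file name is just an example:
# Sketch: record an SVO for offline testing ("test_setup.svo" is an example name)
recording_params = sl.RecordingParameters("test_setup.svo", sl.SVO_COMPRESSION_MODE.H264)
err = camera.enable_recording(recording_params)
if err == sl.ERROR_CODE.SUCCESS:
    for _ in range(300):  # roughly 10 s at 30 fps
        if camera.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
            pass  # each successful grab appends a frame to the SVO automatically
    camera.disable_recording()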