Bug using Plane Detector service

Hello, I have been testing the Plane Detector service, and using the clicked point option from RViz was not giving good results, for example:

[ INFO] [1707996898.713487882]: Clicked 3D point [X FW, Y LF, Z UP]: [1.57332,0.297732,0.614705]
[ INFO] [1707996898.713780693]: ‘map’ → ‘zed2i_left_camera_optical_frame’: {0.114,0.021,0.014} {-0.452,0.502,-0.503,-0.539}
[ INFO] [1707996898.713935418]: Point in camera coordinates [Z FW, X RG, Y DW]: {-0.252,-0.425,1.629}
[ INFO] [1707996898.714120225]: Clicked point image coordinates: [inf,inf]

So I tried publishing points on the topic directly, but I still could not figure out why the clicked point image coordinates were inf. After digging into the calculation in the nodelet, I found that this seems to be caused by this line in zed_wrapper_nodelet.cpp:

float out_scale_factor = mMatResol.width / mCamWidth;

I added some couts to check the variables, and this is the result:

[mMatResol.width mCamWidth out_scale_factor]: [640, 1280, 0]

Is this supposed to happen in this C++ version? I had to manually cast both operands to float to get the correct result:

float out_scale_factor = float(mMatResol.width) / float(mCamWidth);

Resulting in:

[mMatResol.width mCamWidth out_scale_factor]: [640, 1280, 0.5]

Is this normal? Should I open a GitHub issue for this?

Hi @AngeLoGa
I’m going to check it and fix the issue if needed.

Alright, looking forward to it.

Hi @AngeLoGa
I just checked and it works as expected with the default parameters:

[zed_wrapper-3] [INFO] [1708080906.262290803] [zed.zed_node]: Clicked 3D point [X FW, Y LF, Z UP]: [1.90397,0.741856,0.833486]
[zed_wrapper-3] [INFO] [1708080906.262346242] [zed.zed_node]: 'map' -> 'zed_left_camera_optical_frame': {0.059,0.018,0.010} {0.504,-0.482,0.518,0.495}
[zed_wrapper-3] [INFO] [1708080906.262362032] [zed.zed_node]: Point in camera coordinates [Z FW, X RG, Y DW]: {-0.646,-0.797,1.936}
[zed_wrapper-3] [INFO] [1708080940.946308630] [zed.zed_node]: Clicked 3D point [X FW, Y LF, Z UP]: [2.15523,0.20853,0.917386]
[zed_wrapper-3] [INFO] [1708080940.946354648] [zed.zed_node]: 'map' -> 'zed_left_camera_optical_frame': {0.059,0.018,0.009} {0.504,-0.482,0.518,0.495}
[zed_wrapper-3] [INFO] [1708080940.946363492] [zed.zed_node]: Point in camera coordinates [Z FW, X RG, Y DW]: {-0.110,-0.849,2.189}

image

Have you changed something? Resolution? Publish resolution?

Hi, I just wanted to check the performance of the algorithm; everything is at its default. I will attach both common.yaml and zed2i.yaml so you can check whether something is not set correctly.

I’m using the ROS 1 Noetic repo on Ubuntu 20.04.

Hi @AngeLoGa
OK, I just checked the ROS 2 node behavior because you did not specify which ROS version you are using.

I’m going to replicate the test with ROS 1 as well, but I recommend migrating to ROS 2 for a better experience.

Indeed, I agree with you, but sadly we already have development in ROS 1 and cannot simply switch to ROS 2 right away.

Hello @Myzhar, any updates on this?

I forgot to attach the config files; please have a look at them. Most of the settings should be unmodified:
common.yaml

# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---

# Dynamic parameters cannot have a namespace
brightness:                     4                               # Dynamic
contrast:                       4                               # Dynamic
hue:                            0                               # Dynamic
saturation:                     4                               # Dynamic
sharpness:                      4                               # Dynamic
gamma:                          8                               # Dynamic - Requires SDK >=v3.1
auto_exposure_gain:             true                            # Dynamic
gain:                           100                             # Dynamic - works only if `auto_exposure_gain` is false
exposure:                       100                             # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance:              true                            # Dynamic
whitebalance_temperature:       42                              # Dynamic - works only if `auto_whitebalance` is false
depth_confidence:               50                              # Dynamic
depth_texture_conf:             100                             # Dynamic
point_cloud_freq:               10.0                            # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)

general:
    camera_name:                zed                             # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id:                     0
    serial_number:              0
    gpu_id:                     -1
    base_frame:                 'base_link'                     # must be equal to the frame_id used in the URDF file
    sdk_verbose:                1                               # Set verbose level of the ZED SDK
    svo_compression:            2                               # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib:                 true                            # enable/disable self calibration at starting
    camera_flip:                'AUTO'                          # camera flip mode: 'OFF', 'ON', 'AUTO'
    pub_resolution:             'CUSTOM'                        # The resolution used for output. 'NATIVE' to use the same `general.grab_resolution` - `CUSTOM` to apply the `general.pub_downscale_factor` downscale factor to reduce bandwidth in transmission
    pub_downscale_factor:       2.0                             # rescale factor used to rescale image before publishing when 'pub_resolution' is 'CUSTOM'
    pub_frame_rate:             15.0                            # frequency of publishing of video and depth data (see SDK API "InitParameters::grab_compute_capping_fps")
    svo_realtime:               true                            # if true the input SVO will be played trying to respect the original framerate eventually skipping frames, otherwise every frame will be processed respecting the `pub_frame_rate` setting
    region_of_interest:         '[]'                            # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
    #region_of_interest:        '[[0.25,0.33],[0.75,0.33],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
    #region_of_interest:        '[[0.25,0.25],[0.75,0.25],[0.75,0.75],[0.25,0.75]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
    #region_of_interest:        '[[0.5,0.25],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.

#video:

depth:
    depth_mode:                 'ULTRA'                         # 'NONE', 'PERFORMANCE', 'QUALITY', 'ULTRA', 'NEURAL'
    depth_stabilization:        1                               # [0-100] - 0: Disabled
    openni_depth_mode:          false                           # 'false': 32bit float meter units, 'true': 16bit uchar millimeter units

pos_tracking:
    pos_tracking_enabled:       true                            # True to enable positional tracking from start
    pos_tracking_mode:          'STANDARD'                      # Matches the ZED SDK setting: 'QUALITY', 'STANDARD' -> Use 'QUALITY' with 'ULTRA' depth mode for improved outdoors performance
    set_gravity_as_origin:      true                            # If 'true' align the positional tracking world to imu gravity measurement. Keep the yaw from the user initial pose.
    imu_fusion:                 true                            # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
    publish_tf:                 true                            # publish `odom -> base_link` TF
    publish_map_tf:             true                            # publish `map -> odom` TF
    map_frame:                  'map'                           # main frame
    odometry_frame:             'odom'                          # odometry frame
    area_memory_db_path:        ''                              # file loaded when the node starts to restore the "known visual features" map. 
    save_area_memory_db_on_exit: false                          # save the "known visual features" map when the node is correctly closed to the path indicated by `area_memory_db_path`
    area_memory:                true                            # Enable to detect loop closure
    floor_alignment:            false                           # Enable to automatically calculate camera/floor offset
    initial_base_pose:          [0.0,0.0,0.0, 0.0,0.0,0.0]      # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true                       # Enable to initialize the odometry with the first valid pose
    path_pub_rate:              2.0                             # Camera trajectory publishing frequency
    path_max_count:             -1                              # use '-1' for unlimited path size
    two_d_mode:                 false                           # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
    fixed_z_value:              0.00                            # Value to be used for Z coordinate if `two_d_mode` is true
    depth_min_range:            0.0                             # Set this value for removing fixed zones of the robot in the FoV of the camera from the visual odometry evaluation
    set_as_static:              false                           # If 'true' the camera will be static and not move in the environment

mapping:
    mapping_enabled:            false                           # True to enable mapping and fused point cloud publication
    resolution:                 0.05                            # maps resolution in meters [0.01f, 0.2f]
    max_mapping_range:          -1                              # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
    fused_pointcloud_freq:      1.0                             # frequency of the publishing of the fused colored point cloud
    clicked_point_topic:        '/clicked_point'                # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection

sensors:
    sensors_timestamp_sync:     false                           # Synchronize Sensors messages timestamp with latest received frame
    max_pub_rate:               400.                            # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
    publish_imu_tf:             true                            # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled:                         false                           # True to enable Object Detection [not available for ZED]
    model:                              'MULTI_CLASS_BOX_ACCURATE'      # 'MULTI_CLASS_BOX_FAST', 'MULTI_CLASS_BOX_MEDIUM', 'MULTI_CLASS_BOX_ACCURATE', 'PERSON_HEAD_BOX_FAST', 'PERSON_HEAD_BOX_ACCURATE'
    max_range:                          15.                             # Maximum detection range
    allow_reduced_precision_inference:  true                            # Allow inference to run at a lower precision to improve runtime and memory usage
    prediction_timeout:                 0.5                             #  During this time [sec], the object will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions            
    object_tracking_enabled:            true                            # Enable/disable the tracking of the detected objects
    mc_people:                          true                            # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle:                         true                            # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag:                             true                            # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal:                          true                            # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics:                     true                            # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable:                 true                            # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport:                           true                            # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models

zed2i.yaml

# params/zed2i.yaml
# Parameters for Stereolabs ZED 2i camera
---

general:
    camera_model:               'zed2i'
    grab_resolution:            'HD720'     # 'HD2K', 'HD1080', 'HD720', 'VGA', 'AUTO'
    grab_frame_rate:            30          # Frequency of frame grabbing for internal SDK operations

depth:
    min_depth:                  0.3         # Min: 0.3, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth:                  15.0        # Max: 40.0

Hi @AngeLoGa
I’m sorry for the very late reply.
I just pushed a fix to the zed-ros-wrapper repository to solve this issue.
Please let me know if it works for you.

Hello @Myzhar, do not worry, it is now working as expected. I just wanted to know if there is any parameter that could significantly improve the plane detection performance; we want to tune it to see how much of the plane it can detect. Thanks for the fix!

Some parameters allow you to improve the plane detection, but they are not supported by the ROS 1 Wrapper.

They are handled by the ZED ROS 2 Wrapper.

An upgrade of the ROS 1 Wrapper is not planned, but you can try to port the ROS 2 code to the ROS 1 wrapper if you need them.