ROS2 wrapper - topics optimization

Hi there,

I am working on a project that requires connecting a Jetson Orin Nano 8GB to a PC via Wi-Fi (with a ZED2i attached to the Jetson). I am using ROS2 on the PC to read the ZED ROS2 wrapper topics published from the Jetson. However, when I subscribe to the image topics in RViz2, the latency increases from 66 ms to 3000 ms, and I have found that the Wi-Fi bandwidth becomes completely saturated.
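
For context, this is roughly how I measured it on the PC side (the topic names assume the default zed2i namespace from my launch file, so they may differ; these tools create their own subscription, so I ran them instead of the RViz display, not in addition to it):

# Per-topic bandwidth of the RGB image stream
ros2 topic bw /zed2i/zed_node/rgb/image_rect_color
# End-to-end delay based on the header stamp; needs reasonably synchronized clocks on Jetson and PC
ros2 topic delay /zed2i/zed_node/depth/depth_registered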

Given this issue, I am looking for the best approach to optimize this connection, since I only need the depth and RGB image topics along with their camera info topics. My first idea was to eliminate the grayscale and raw image topics, as well as the point cloud topic, but I don't know exactly how to do that or how it would affect the rest of the node.

Thanks in advance for your help!

Hi @Jimi1811

This is not required. Topics without subscribers do not consume bandwidth: they are only advertised, and no data is published on them until someone subscribes.
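
If you want to double-check which of the heavy topics currently have subscribers (for example an RViz display you forgot to disable), you can list the endpoints; the topic name below is just an example and assumes the default zed2i namespace:

ros2 topic info /zed2i/zed_node/point_cloud/cloud_registered --verbose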

You can reduce the required bandwidth by lowering the publishing frequency (general.pub_frame_rate), and/or by reducing the data size, setting general.pub_resolution to 'CUSTOM' and raising the value of general.pub_downscale_factor.
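
For example, the relevant part of common.yaml could look like this (the numbers are only an illustration, pick what your application tolerates; pub_downscale_factor is applied only when pub_resolution is 'CUSTOM'):

general:
    pub_resolution: 'CUSTOM' # publish downscaled images instead of the native grab resolution
    pub_downscale_factor: 2.0 # halves width and height, i.e. roughly a quarter of the pixels per frame
    pub_frame_rate: 15.0 # publish video/depth at 15 Hz even if the camera grabs at 30 Hz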

I recommend you also read the guides on tuning the data distribution middleware (DDS) to improve the performance of the data transport.
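
As a minimal sketch of the kind of tuning those guides describe (assuming Linux on both the Jetson and the PC; the values below are only examples, adjust them to your setup):

# Select the DDS implementation explicitly on both machines (example: Cyclone DDS, if installed)
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

# Enlarge the kernel UDP receive buffer and IP fragment reassembly limits,
# which large image messages over Wi-Fi exhaust quickly
sudo sysctl -w net.core.rmem_max=2147483647
sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728
sudo sysctl -w net.ipv4.ipfrag_time=3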

Hello @Myzhar,

Thank you for your response. Currently, I am fine-tuning the DDS parameters as the settings in the .yaml file you mentioned are already configured.

I am quite confused regarding the topics. I am using Isaac ROS packages to obtain the camera pose and a 3D mesh with a cost map, so I do not need the pose from the ZED wrapper's VSLAM or its point cloud. Given that these topics are not being used, I assume there is no need to publish them. However, when I visualize these topics in RViz, they still appear. Does this mean that, even without subscribers, the ZED wrapper is using the Jetson's GPU and RAM to compute the pose and point cloud, and that they only appear when someone subscribes to or visualizes them?

You can disable the Positional Tracking module by setting:
depth.depth_stabilization → 0
pos_tracking.pos_tracking_enabled → false

Hello @Myzhar, thanks for the parameters. Unfortunately, I get the same result: I can still see the outputs in RViz. Both mapping_enabled and pos_tracking_enabled are false. Here is the entire file:

# config/common_yaml
# Common parameters to Stereolabs ZED and ZED mini cameras

---
/**:
    ros__parameters:

        use_sim_time: false # Set to `true` only if there is a publisher for the simulated clock to the `/clock` topic. Normally used in simulation mode.

        simulation:
            sim_enabled: false # Set to `true` to enable the simulation mode and connect to a simulation server
            sim_address: '127.0.0.1' # The connection address of the simulation server. See the documentation of the supported simulation plugins for more information.
            sim_port: 30000 # The connection port of the simulation server. See the documentation of the supported simulation plugins for more information.

        general:
            camera_model: 'zed2i'
            camera_name: 'zed2i' # usually overwritten by launch file
            grab_resolution: 'HD720' # The native camera grab resolution. 'HD2K', 'HD1080', 'HD720', 'VGA', 'AUTO'
            grab_frame_rate: 30 # ZED SDK internal grabbing rate

            svo_file: 'live' # usually overwritten by launch file
            svo_loop: false # Enable loop mode when using an SVO as input source
            svo_realtime: true # if true, the SVO will be played trying to respect the original framerate, possibly skipping frames; otherwise every frame will be processed respecting the `pub_frame_rate` setting
            camera_timeout_sec: 5
            camera_max_reconnect: 5
            camera_flip: false
            serial_number: 0 # usually overwritten by launch file
            pub_resolution: 'NATIVE' # previously 'CUSTOM' # The resolution used for output. 'NATIVE' to use the same `general.grab_resolution` - `CUSTOM` to apply the `general.pub_downscale_factor` downscale factor to reduce bandwidth in transmission
            pub_downscale_factor: 2.0 # rescale factor used to rescale image before publishing when 'pub_resolution' is 'CUSTOM'
            pub_frame_rate: 30.0 # frequency of publishing of visual images and depth images
            gpu_id: -1
            optional_opencv_calibration_file: '' # Optional path where the ZED SDK can find a file containing the calibration information of the camera computed by OpenCV. Read the ZED SDK documentation for more information: https://www.stereolabs.com/docs/api/structsl_1_1InitParameters.html#a9eab2753374ef3baec1d31960859ba19 
            region_of_interest: '[]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #region_of_interest: '[[0.25,0.33],[0.75,0.33],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #region_of_interest: '[[0.25,0.25],[0.75,0.25],[0.75,0.75],[0.25,0.75]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #region_of_interest: '[[0.5,0.25],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.

        video:
            brightness: 4 # [DYNAMIC] Not available for ZED X/ZED X Mini
            contrast: 4 # [DYNAMIC] Not available for ZED X/ZED X Mini
            hue: 0 # [DYNAMIC] Not available for ZED X/ZED X Mini
            saturation: 4 # [DYNAMIC]
            sharpness: 4 # [DYNAMIC]
            gamma: 8 # [DYNAMIC]
            auto_exposure_gain: true # [DYNAMIC]
            exposure: 80 # [DYNAMIC]
            gain: 80 # [DYNAMIC]
            auto_whitebalance: true # [DYNAMIC]
            whitebalance_temperature: 42 # [DYNAMIC] - [28,65] works only if `auto_whitebalance` is false
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE

        depth:
            min_depth: 0.5 # Min: 0.2, Max: 3.0
            max_depth: 10.0 # Max: 40.0

            depth_mode: 'ULTRA' # Matches the ZED SDK setting: 'NONE', 'PERFORMANCE', 'QUALITY', 'ULTRA', 'NEURAL' - Note: if 'NONE' all the modules that requires depth extraction are disabled by default (Pos. Tracking, Obj. Detection, Mapping, ...)
            depth_stabilization: 0 # Forces positional tracking to start if greater than 0 - Range: [0,100]
            openni_depth_mode: false # 'false': 32bit float [meters], 'true': 16bit unsigned int [millimeters]
            point_cloud_freq: 15.0 # [DYNAMIC] - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)
            depth_confidence: 10 # [DYNAMIC]
            depth_texture_conf: 100 # [DYNAMIC]
            remove_saturated_areas: true # [DYNAMIC]
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE 

        pos_tracking:
            pos_tracking_enabled: false # previously true # True to enable positional tracking from start
            pos_tracking_mode: 'STANDARD' # Matches the ZED SDK setting: 'QUALITY', 'STANDARD'
            imu_fusion: true # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
            publish_tf: true # [usually overwritten by launch file] publish `odom -> camera_link` TF
            publish_map_tf: true # [usually overwritten by launch file] publish `map -> odom` TF
            base_frame: "base_link" # previously not present; it was added, but nothing changed
            map_frame: 'map'
            odometry_frame: 'odom'
            area_memory_db_path: ''
            area_memory: true # Enable to detect loop closure
            depth_min_range: 0.0 # Set this value for removing fixed zones of the robot in the FoV of the camera from the visual odometry evaluation
            set_as_static: false # If 'true' the camera will be static and not move in the environment
            set_gravity_as_origin: true # If 'true' align the positional tracking world to imu gravity measurement. Keep the yaw from the user initial pose.
            floor_alignment: false # Enable to automatically calculate camera/floor offset
            initial_base_pose: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] # Initial position of the `camera_link` frame in the map -> [X, Y, Z, R, P, Y]
            init_odom_with_first_valid_pose: true # Enable to initialize the odometry with the first valid pose
            path_pub_rate: 2.0 # [DYNAMIC] - Camera trajectory publishing frequency
            path_max_count: -1 # use '-1' for unlimited path size
            two_d_mode: false # Force navigation on a plane. If true the Z value will be fixed to 'fixed_z_value', roll and pitch to zero
            fixed_z_value: 0.00 # Value to be used for Z coordinate if `two_d_mode` is true
            transform_time_offset: 0.0 # The value added to the timestamp of `map->odom` and `odom->camera_link` transform being generated
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE

        gnss_fusion:
            gnss_fusion_enabled: false # fuse 'sensor_msg/NavSatFix' message information into pose data
            gnss_fix_topic: '/gps/fix' # Name of the GNSS topic of type NavSatFix to subscribe [Default: '/gps/fix']
            gnss_init_distance: 5.0 # The minimum distance traveled by the robot required to initialize the GNSS fusion [SDK <= v4.0.5]
            gnss_zero_altitude: false # Set to `true` to ignore GNSS altitude information
            gnss_frame: 'gnss_link' # [usually overwritten by launch file] The TF frame of the GNSS sensor
            publish_utm_tf: true # Publish `utm` -> `map` TF
            broadcast_utm_transform_as_parent_frame: false # if 'true' publish `utm` -> `map` TF, otherwise `map` -> `utm`
            enable_reinitialization: true # determines whether reinitialization should be performed between GNSS and VIO fusion when a significant disparity is detected between GNSS data and the current fusion data. It becomes particularly crucial during prolonged GNSS signal loss scenarios.
            enable_rolling_calibration: true # If this parameter is set to true, the fusion algorithm will use a rough VIO/GNSS calibration at first and then refine it. This allows you to quickly get a fused position.
            enable_translation_uncertainty_target: false # When this parameter is enabled (set to true), the calibration process between GNSS and VIO accounts for the uncertainty in the determined translation, thereby facilitating the calibration termination. The maximum allowable uncertainty is controlled by the 'target_translation_uncertainty' parameter.
            gnss_vio_reinit_threshold: 5 # determines the threshold for GNSS/VIO reinitialization. If the fused position deviates outside the region defined by the product of the GNSS covariance and the gnss_vio_reinit_threshold, a reinitialization will be triggered.
            target_translation_uncertainty: 10e-2 # defines the target translation uncertainty at which the calibration process between GNSS and VIO concludes. By default, the threshold is set at 10 centimeters.
            target_yaw_uncertainty: 0.1 # defines the target yaw uncertainty at which the calibration process between GNSS and VIO concludes. The unit of this parameter is in radian. By default, the threshold is set at 0.1 radians.

        mapping:
            mapping_enabled: false # True to enable mapping and fused point cloud publication
            resolution: 0.05 # map resolution in meters [min: 0.01f - max: 0.2f]
            max_mapping_range: 5.0 # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
            fused_pointcloud_freq: 0.0 # frequency of the publishing of the fused colored point cloud
            clicked_point_topic: '/clicked_point' # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection
            pd_max_distance_threshold: 0.15 # Plane detection: controls the spread of plane by checking the position difference.
            pd_normal_similarity_threshold:  15.0 # Plane detection: controls the spread of plane by checking the angle difference.
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE

        sensors:
            publish_imu_tf: true # [usually overwritten by launch file] enable/disable the IMU TF broadcasting
            sensors_image_sync: false # Synchronize Sensors messages with latest published video/depth message
            sensors_pub_rate: 100. # frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE

        object_detection:
            od_enabled: false # True to enable Object Detection
            model: 'MULTI_CLASS_BOX_MEDIUM' # 'MULTI_CLASS_BOX_FAST', 'MULTI_CLASS_BOX_MEDIUM', 'MULTI_CLASS_BOX_ACCURATE', 'PERSON_HEAD_BOX_FAST', 'PERSON_HEAD_BOX_ACCURATE'
            allow_reduced_precision_inference: true # Allow inference to run at a lower precision to improve runtime and memory usage
            max_range: 20.0 # [m] Defines an upper depth range for detections
            confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of an object [0,99]
            prediction_timeout: 0.5 # During this time [sec], the object will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions            
            filtering_mode: 1 # '0': NONE - '1': NMS3D - '2': NMS3D_PER_CLASS
            mc_people: true # [DYNAMIC] - Enable/disable the detection of persons for 'MULTI_CLASS_X' models
            mc_vehicle: true # [DYNAMIC] - Enable/disable the detection of vehicles for 'MULTI_CLASS_X' models
            mc_bag: true # [DYNAMIC] - Enable/disable the detection of bags for 'MULTI_CLASS_X' models
            mc_animal: true # [DYNAMIC] - Enable/disable the detection of animals for 'MULTI_CLASS_X' models
            mc_electronics: true # [DYNAMIC] - Enable/disable the detection of electronic devices for 'MULTI_CLASS_X' models
            mc_fruit_vegetable: true # [DYNAMIC] - Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_X' models
            mc_sport: true # [DYNAMIC] - Enable/disable the detection of sport-related objects for 'MULTI_CLASS_X' models            
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
        
        body_tracking:
            bt_enabled: false # True to enable Body Tracking
            model: 'HUMAN_BODY_MEDIUM' # 'HUMAN_BODY_FAST', 'HUMAN_BODY_MEDIUM', 'HUMAN_BODY_ACCURATE'
            body_format: 'BODY_38' # 'BODY_18','BODY_34','BODY_38','BODY_70'
            allow_reduced_precision_inference: false # Allow inference to run at a lower precision to improve runtime and memory usage
            max_range: 20.0 # [m] Defines an upper depth range for detections
            body_kp_selection: 'FULL' # 'FULL', 'UPPER_BODY'
            enable_body_fitting: false # Defines if the body fitting will be applied
            enable_tracking: true # Defines if the object detection will track objects across the image stream
            prediction_timeout_s: 0.5 # During this time [sec], the skeleton will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
            confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of skeleton key points [0,99]
            minimum_keypoints_threshold: 5 # [DYNAMIC] - Minimum number of skeleton key points to be detected for a valid skeleton
            qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
            qos_depth: 1 # Queue size if using KEEP_LAST
            qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
            qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE

        advanced: # WARNING: do not modify unless you are confident of what you are doing
        # Reference documentation: https://man7.org/linux/man-pages/man7/sched.7.html
            thread_sched_policy: 'SCHED_BATCH' # 'SCHED_OTHER', 'SCHED_BATCH', 'SCHED_FIFO', 'SCHED_RR' - NOTE: 'SCHED_FIFO' and 'SCHED_RR' require 'sudo'
            thread_grab_priority: 50 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required
            thread_sensor_priority: 70 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required
            thread_pointcloud_priority: 60 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required

        debug:
            sdk_verbose: 1 # Set the verbose level of the ZED SDK 
            debug_common: false
            debug_sim: false
            debug_video_depth: false
            debug_camera_controls: false
            debug_point_cloud: false
            debug_positional_tracking: false
            debug_gnss: false
            debug_sensors: false
            debug_mapping: false
            debug_terrain_mapping: false
            debug_object_detection: false
            debug_body_tracking: false
            debug_advanced: false

Is there any other way to avoid creating the point cloud?

The point cloud is not created if there are no subscribers to the point cloud topic.
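
If you want to confirm this on the Jetson side, you can watch the board load while toggling the point cloud display in RViz on the PC; the GPU and RAM usage should drop somewhat when nothing subscribes to the point cloud. tegrastats ships with JetPack:

tegrastats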

I’m testing the node in order to provide you with the parameters to completely disable the positional tracking module.

You must also force the publish_tf parameter to false in the launch command because the launch file overrides the value set in common.yaml:
ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2i publish_tf:=false
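
To verify from the PC that the wrapper no longer broadcasts those transforms, you can list who is publishing on /tf; the ZED node should no longer appear among the publishers (other nodes, such as your Isaac ROS VSLAM, may still be there):

ros2 topic info /tf --verbose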