Hello ZED Community,
I am currently integrating a ZED camera with NVIDIA’s Isaac ROS Visual SLAM (VSLAM) on a Jetson AGX Orin platform. As VSLAM requires grayscale image input, I am exploring the most efficient method to acquire these images.
Options Considered:
- Subscribing to native grayscale topics: The ZED ROS 2 wrapper publishes rectified grayscale images on topics such as `/zed/zed_node/left/image_rect_gray` and `/zed/zed_node/right/image_rect_gray`. Subscribing directly to these topics leverages the camera's internal processing, potentially minimizing additional computational overhead.
- Utilizing NVIDIA's image format converter nodes: Alternatively, NVIDIA's `isaac_ros_image_proc` package offers GPU-accelerated nodes for image-processing tasks, including format conversion. By subscribing to the color image topics (e.g., `/zed/zed_node/left/image_rect_color`), these nodes can convert the images to the desired grayscale format using the AGX Orin's GPU resources.
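For reference, whichever component performs the conversion, the result is a mono8 image, and the standard way to derive it from color is the BT.601 luma weighting. A minimal NumPy sketch of that computation (the actual ZED SDK and Isaac ROS implementations are internal; the weights below are the common convention, not confirmed from either codebase):

```python
import numpy as np

def rgb_to_mono8(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to mono8 using BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = rgb[..., :3].astype(np.float32) @ weights
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# Tiny example: one row of pure red, green, and blue pixels.
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(rgb_to_mono8(img))  # [[ 76 150  29]]
```

Either path ends up producing pixel values equivalent to this; the open question is where the arithmetic runs (camera/SDK, wrapper CPU, or Orin GPU).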
Specific Questions:
- Grayscale image generation: How are the grayscale images on the native topics generated? Are they produced directly by the ZED camera's onboard processing, or does the ZED ROS 2 wrapper perform the conversion from color to grayscale?
- Performance considerations: Which method is more efficient on the Jetson AGX Orin platform? Is it better to use the native grayscale topics, potentially reducing CPU load, or to employ NVIDIA's GPU-accelerated image format converter nodes for grayscale conversion? Are there any benchmarks or experiences related to CPU/GPU utilization and latency for these approaches?
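In case it helps frame the second question: a crude way to bound the CPU-side cost is to time a plain NumPy conversion on a representative frame size. This is a hypothetical micro-benchmark, not a measurement of either pipeline (the wrapper and Isaac ROS nodes use optimized native/GPU code, so real numbers will differ):

```python
import time
import numpy as np

# Hypothetical baseline: CPU cost of color->gray on a 1280x720 RGB frame.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)

start = time.perf_counter()
for _ in range(10):
    gray = np.clip(np.rint(frame.astype(np.float32) @ weights), 0, 255).astype(np.uint8)
elapsed = (time.perf_counter() - start) / 10

print(f"mean CPU conversion time: {elapsed * 1e3:.2f} ms/frame")
```

Numbers from something like this would at least indicate whether the conversion is even significant next to the VSLAM workload itself.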
I appreciate any insights or experiences you can share regarding the optimal approach for acquiring grayscale images for VSLAM on the Jetson AGX Orin.
Thank you!