Hello everyone,
I am working with an NVIDIA Jetson AGX Orin (JetPack 6.1, Ubuntu 22.04) and integrating the ZED ROS2 Wrapper with Isaac ROS. I am using a ZED X camera and want to optimize image retrieval for GPU-based processing.
By default, the ZED ROS2 Wrapper retrieves images into CPU memory (`sl::MEM::CPU`), but since I am using Isaac ROS for GPU-accelerated processing, I would like to retrieve images directly into GPU memory (`sl::MEM::GPU`) to eliminate unnecessary CPU-GPU transfers.
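To make the cost concrete, here is a rough sketch of the round trip I believe happens with the default CPU path (the ZED SDK and CUDA runtime calls are real; the helper function itself is only for illustration):

```cpp
// Rough sketch (assumed behaviour, not wrapper code) of the round trip I want
// to avoid: with sl::MEM::CPU the SDK copies the frame device -> host, and a
// GPU-based consumer then has to copy it host -> device again.
#include <sl/Camera.hpp>
#include <cuda_runtime.h>

void cpuPathRoundTrip(sl::Camera & zed, sl::Mat & left_cpu) {
  // 1) The SDK downloads the left image into host memory.
  if (zed.retrieveImage(left_cpu, sl::VIEW::LEFT, sl::MEM::CPU) !=
      sl::ERROR_CODE::SUCCESS) {
    return;
  }

  // 2) A GPU consumer then has to upload the same pixels again.
  size_t bytes = left_cpu.getStepBytes(sl::MEM::CPU) * left_cpu.getHeight();
  void * dev_buf = nullptr;
  cudaMalloc(&dev_buf, bytes);
  cudaMemcpy(dev_buf, left_cpu.getPtr<sl::uchar4>(sl::MEM::CPU), bytes,
             cudaMemcpyHostToDevice);

  // ... GPU processing would happen here ...
  cudaFree(dev_buf);
}
```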
I have modified the `retrieveImage()` call in `zed_wrapper_node.cpp` as follows:
if (mRgbSubCount + mLeftSubCount + mStereoSubCount > 0) {
  // Changed sl::MEM::CPU to sl::MEM::GPU so the left image stays in device memory
  retrieved |= sl::ERROR_CODE::SUCCESS ==
               mZed->retrieveImage(mMatLeft, sl::VIEW::LEFT, sl::MEM::GPU, mMatResol);
  mSdkGrabTS = mMatLeft.timestamp;
  mRgbSubscribed = true;
  DEBUG_VD("Left image retrieved");
}
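For context, this is how I currently assume a downstream CUDA consumer would access the frame once it is in device memory (the `sl::Mat` accessors are from the ZED SDK; the helper function is hypothetical):

```cpp
// Hypothetical helper (not part of the wrapper or Isaac ROS) showing how a
// CUDA-based consumer could read the image once it has been retrieved with
// sl::MEM::GPU. The sl::Mat accessors are from the ZED SDK.
#include <sl/Camera.hpp>

void consumeLeftImageOnDevice(sl::Mat & left_gpu) {
  // Raw CUDA device pointer to the BGRA8 pixel data (no host copy involved).
  sl::uchar4 * dev_ptr = left_gpu.getPtr<sl::uchar4>(sl::MEM::GPU);

  // Row stride in bytes of the device allocation (GPU images are pitched,
  // so this is usually larger than width * 4).
  size_t pitch = left_gpu.getStepBytes(sl::MEM::GPU);

  size_t width = left_gpu.getWidth();
  size_t height = left_gpu.getHeight();

  // dev_ptr, pitch, width, and height are what a GPU pipeline (a CUDA kernel,
  // NPP, or a GPU-aware publisher) needs to work on the frame in place.
  (void)dev_ptr; (void)pitch; (void)width; (void)height;
}
```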
My Questions:
- Is this modification sufficient to ensure that Isaac ROS nodes (such as those using TensorRT or GXF) can directly consume images in GPU memory?
- Would integrating Isaac ROS Argus Camera provide any advantage, or is it unnecessary since I am using the ZED SDK?
I would appreciate any insights from those who have experience optimizing ROS2 image pipelines for GPU processing.
Thanks in advance!