Hello,
I’m using a Jetson device running ROS 2 Humble along with the ZED X camera. I’m currently trying to fuse the ZED’s visual-inertial odometry (VIO) with GPS data using the robot_localization package.
I’ve noticed that the ZED only publishes visual odometry in the odom frame, and not the fused odometry. The topics that include both visual odometry and IMU data are published in the map frame.
Is there a topic that provides VIO data specifically in the odom frame, or a recommended way to obtain it?
Additionally, I’d like to know if there’s a best practice for fusing visual odometry, IMU, and GPS using the robot_localization package in this setup.
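Not an official recommendation, but a common pattern from the robot_localization documentation is a dual-EKF setup: one `ekf_node` fusing VIO and IMU in the `odom` frame, a second one adding GPS (via `navsat_transform_node`) in the `map` frame. A minimal launch sketch follows; the ZED topic names (`/zed/zed_node/odom`, `/zed/zed_node/imu/data`) and the GPS topic (`/gps/fix`) are assumptions you would adapt to your setup:

```python
# Hedged sketch of a dual-EKF robot_localization launch file.
# Topic names (/zed/zed_node/odom, /zed/zed_node/imu/data, /gps/fix)
# are assumptions for illustration; adapt them to your actual setup.
from launch import LaunchDescription
from launch_ros.actions import Node

# Boolean config vector order (robot_localization convention):
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
ODOM_CONFIG = [True, True, True, False, False, False,
               True, True, True, False, False, False,
               False, False, False]
IMU_CONFIG = [False, False, False, True, True, True,
              False, False, False, True, True, True,
              False, False, False]

def generate_launch_description():
    # Local EKF: continuous, smooth estimate in the odom frame.
    ekf_local = Node(
        package='robot_localization', executable='ekf_node',
        name='ekf_local', output='screen',
        parameters=[{
            'frequency': 30.0,
            'world_frame': 'odom',
            'odom0': '/zed/zed_node/odom',
            'odom0_config': ODOM_CONFIG,
            'imu0': '/zed/zed_node/imu/data',
            'imu0_config': IMU_CONFIG,
        }])
    # Global EKF: globally referenced estimate in the map frame,
    # fed by the GPS odometry produced by navsat_transform_node.
    ekf_global = Node(
        package='robot_localization', executable='ekf_node',
        name='ekf_global', output='screen',
        parameters=[{
            'frequency': 30.0,
            'world_frame': 'map',
            'odom0': '/zed/zed_node/odom',
            'odom0_config': ODOM_CONFIG,
            'odom1': '/odometry/gps',
            'odom1_config': [True, True, False] + [False] * 12,
        }],
        remappings=[('odometry/filtered', 'odometry/filtered/global')])
    navsat = Node(
        package='robot_localization', executable='navsat_transform_node',
        name='navsat_transform', output='screen',
        parameters=[{'frequency': 30.0}],
        remappings=[('gps/fix', '/gps/fix'),
                    ('odometry/filtered', 'odometry/filtered/global')])
    return LaunchDescription([ekf_local, ekf_global, navsat])
```

Note that since the ZED already publishes visual-inertial odometry, feeding both its odometry and its IMU into the same EKF can double-count information; which inputs to enable depends on your tuning.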
I have tried using the ZED ROS 2 wrapper to fuse GPS and VIO, but I am stuck on this issue.
This is not correct. The odometry published on the /odom topic is the result of a visual-inertial fusion algorithm.
The /pose message (map frame) is the result of our full localization with loop closure.
Could you please clarify which instructions you are referring to? Is there any additional documentation that I might have missed?
I tried following and implementing the example provided in the tutorial. My SDK GNSS fusion works properly, but in the ROS2 wrapper the fusion does not complete — it keeps showing “Calibration in Progress” and remains stuck there. I also tried driving it around, but that did not resolve the issue.
Any guidance on what might be missing or required steps to resolve this would be greatly appreciated.
Have you tried changing the parameters to match the configuration of the SDK application that you tested?
Are you moving your robot, or are you waiting for the calibration to complete before moving it?
If the robot remains static, the calibration cannot complete.
Yes, I have tried to replicate the parameters to match my SDK settings.
The only difference is that when I use the SDK, my GPS is connected directly to the Jetson, and I use the tutorial code to generate a GPS message of the suitable type.
When using the ROS2 wrapper, I read messages from the Pixhawk and convert them into NavSat messages, which are compatible with the ROS wrapper fusion code. All other parameters are the same in both the SDK and ROS2 code.
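(For anyone hitting the same issue: one easy place for this Pixhawk-to-NavSatFix conversion to go wrong is the covariance. If the published NavSatFix carries a zero covariance, the fusion has nothing to weight the fix with. A hedged sketch of the mapping, assuming your Pixhawk stream provides horizontal/vertical accuracy estimates, here called `eph`/`epv` in metres; adapt the field names to your MAVLink source:)

```python
# Sketch: build a NavSatFix-style 3x3 position covariance (row-major,
# 9 floats) from assumed Pixhawk horizontal/vertical accuracy values
# (eph/epv, metres). Variance is the accuracy squared.

def gps_accuracy_to_covariance(eph: float, epv: float) -> list:
    """Return a diagonal covariance: horizontal variance twice, then vertical."""
    var_h = eph * eph   # horizontal variance (east, north)
    var_v = epv * epv   # vertical variance (up)
    return [var_h, 0.0, 0.0,
            0.0, var_h, 0.0,
            0.0, 0.0, var_v]

# When filling the sensor_msgs/NavSatFix message, also set:
#   msg.position_covariance = gps_accuracy_to_covariance(eph, epv)
#   msg.position_covariance_type = NavSatFix.COVARIANCE_TYPE_DIAGONAL_KNOWN
```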
I am using ZED SDK 4.2 and the ROS2 wrapper Humble release 4.2.5 for these experiments. In the ROS2 wrapper, I noticed that when I set the GNSS parameter to true, the zed_camera_launch file overwrites it and resets it to false. To start the fusion, I have to manually change enable_gnss in the zed_camera_launch file to true.
Yes, I am moving the robot for the calibration to complete, but that does not help; I remain stuck on the “Calibration in Progress” message.
Verify that the covariance information is correctly propagated.
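A quick way to do this is to sanity-check the incoming fixes before they reach the fusion. A small sketch of such a check (plain Python; the constant and field layout follow sensor_msgs/NavSatFix, but the helper itself is just an illustration):

```python
# Sketch: sanity-check a NavSatFix-style covariance before fusion.
# A covariance type of UNKNOWN (0) or an all-zero diagonal means the
# fusion cannot weight the GNSS fix against VIO.

COVARIANCE_TYPE_UNKNOWN = 0  # sensor_msgs/NavSatFix constant

def covariance_is_usable(cov: list, cov_type: int) -> bool:
    """Usable if the type is known and all diagonal variances are positive."""
    if cov_type == COVARIANCE_TYPE_UNKNOWN:
        return False
    diagonal = (cov[0], cov[4], cov[8])
    return all(v > 0.0 for v in diagonal)
```

For example, `covariance_is_usable([0.0] * 9, 2)` is `False`, while a fix with positive diagonal variances and a known covariance type passes.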
I recommend you use the latest ZED SDK v5 and the latest ZED ROS2 Wrapper.
We are not upgrading/fixing older versions of the ZED ROS2 Wrapper that have been frozen.
Hello @Myzhar,
I’m testing the ZED SDK 5.0.6 together with the latest ROS2 wrapper and have a few questions about the wrapper’s behavior.
It’s my understanding that the ~/geopose topic provides the filtered latitude/longitude poses after GNSS + VIO fusion. I am running tests outdoors.
If that understanding is correct, then using GNSS and VIO fusion with the ROS2 wrapper works properly in areas with a strong GPS signal. However, when I enter locations with a weak GPS signal and large covariance, the fused pose appears to drift along with the GPS drift.
Is this behavior expected?
Additionally, can I use the ROS2 wrapper to transition smoothly from an environment with good GPS reception to a GPS-denied environment?
Normally, this should not happen. If the covariance of the GPS reflects the weak quality of the signal, then the fusion algorithm will trust the VIO information and provide correct tracking.
Yes, as long as the covariance information is good.
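The intuition can be illustrated with a toy 1-D inverse-variance fusion. This is not the SDK's actual filter, only the principle: as the GPS variance grows, the fused estimate converges to the VIO estimate.

```python
# Toy 1-D inverse-variance fusion, illustrating why a large GPS
# covariance should make the fused estimate follow VIO.
# NOT the ZED SDK's actual filter; it only shows the weighting principle.

def fuse(vio: float, vio_var: float, gps: float, gps_var: float) -> float:
    """Weight each measurement by the inverse of its variance."""
    w_vio = 1.0 / vio_var
    w_gps = 1.0 / gps_var
    return (w_vio * vio + w_gps * gps) / (w_vio + w_gps)

# Good GPS (small variance): estimate sits between the two sources.
near = fuse(vio=0.0, vio_var=0.04, gps=1.0, gps_var=0.04)    # -> 0.5
# Degraded GPS (large variance): estimate stays close to VIO.
far = fuse(vio=0.0, vio_var=0.04, gps=5.0, gps_var=100.0)    # -> ~0.002
```

If the GPS driver keeps reporting a small covariance even when the signal is weak, the filter has no way to know it should distrust the fix, which would produce exactly the drift described above.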