Is it possible to fuse the ZED camera’s estimated odometry with my own wheel odometry estimate to get a more reliable spatial map?
I’m asking because I noticed significant localization drift when the camera moves through featureless areas such as white walls, and this affects the spatial map quite a lot (e.g. duplicate walls offset by 20-30 cm).
I’m using the ROS 2 wrapper, but if it’s not possible there, I’m okay with implementing it another way.
If I use the method described in the Nav2 tutorial, will the ZED spatial mapping also be influenced?
What I need in the end is a refined point cloud or mesh of the environment, and I want to produce it with the best possible localization that my robot can offer (VIO + wheel odometry).
How does the underlying spatial mapping pipeline from ZED get the updated localization frame from nav2?
No, it won’t. The ZED spatial mapping module does not use external information for pose estimation. You should use ROS 2 packages for this; consider RTAB-Map, for example.
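If you want to fuse the wheel odometry with the ZED odometry on the ROS 2 side (the approach from the Nav2 tutorial), a robot_localization EKF node is the usual tool. Below is a minimal launch sketch under some assumptions: the topic names /wheel/odom and /zed/zed_node/odom are placeholders for whatever your robot and the wrapper actually publish, and the sensor configuration flags are only an example split (velocities from the wheels, planar pose from the camera).

```python
# Minimal sketch: robot_localization EKF fusing wheel odometry with
# ZED visual(-inertial) odometry. Topic names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    ekf_params = {
        'frequency': 30.0,
        'two_d_mode': True,              # planar robot; set False for full 3D
        'odom_frame': 'odom',
        'base_link_frame': 'base_link',
        'world_frame': 'odom',
        # Wheel odometry (placeholder topic): trust planar velocities and yaw rate.
        'odom0': '/wheel/odom',
        'odom0_config': [False, False, False,   # x, y, z
                         False, False, False,   # roll, pitch, yaw
                         True,  True,  False,   # vx, vy, vz
                         False, False, True,    # vroll, vpitch, vyaw
                         False, False, False],  # ax, ay, az
        # ZED odometry (placeholder topic): use planar pose (x, y, yaw).
        'odom1': '/zed/zed_node/odom',
        'odom1_config': [True,  True,  False,
                         False, False, True,
                         False, False, False,
                         False, False, False,
                         False, False, False],
        'publish_tf': True,              # EKF publishes odom -> base_link
    }

    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[ekf_params],
        ),
    ])
```

Note that if the EKF publishes the odom -> base_link transform, the ZED wrapper’s own TF broadcasting (the pos_tracking.publish_tf parameter in recent wrapper versions) should be disabled so the two nodes don’t publish conflicting transforms.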
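For the map itself, since the ZED spatial mapping module only uses the camera’s internal pose, an external mapping package has to consume the fused odometry instead. A rough sketch with RTAB-Map is shown below; it assumes the rtabmap_slam package layout of recent rtabmap_ros releases, the default /odometry/filtered output of the robot_localization EKF, and placeholder ZED topic names that you would adjust to your camera namespace.

```python
# Sketch: feed RTAB-Map the ZED RGB-D stream plus the fused odometry
# from robot_localization. ZED topic names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam',
            executable='rtabmap',
            name='rtabmap',
            output='screen',
            parameters=[{
                'frame_id': 'base_link',
                'subscribe_depth': True,
                'approx_sync': True,
            }],
            remappings=[
                # RGB-D input from the ZED wrapper (adjust namespaces).
                ('rgb/image', '/zed/zed_node/rgb/image_rect_color'),
                ('rgb/camera_info', '/zed/zed_node/rgb/camera_info'),
                ('depth/image', '/zed/zed_node/depth/depth_registered'),
                # Fused odometry from the robot_localization EKF.
                ('odom', '/odometry/filtered'),
            ],
        ),
    ])
```

RTAB-Map can then export the assembled point cloud or a mesh, so you still end up with the refined map you’re after, just built on the fused localization rather than on the ZED spatial mapping output.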