Fusion of 3D Stereo-camera SLAM with 3D LiDAR SLAM

Hi everyone. I ordered a ZED 2 to use as a tool in my doctoral thesis project. What I intend to do is fuse 3D SLAM from the ZED 2 with 3D LiDAR SLAM. Both are installed on the robot, and I have two LiDAR sensors.
I looked at some existing solutions; there are approaches that do this with plain RGB cameras, such as ORB-based methods, but since I already have the ZED those are not really necessary.
What I have in mind is basically this: after the extrinsic/intrinsic calibration of both the LiDAR and the ZED, and after creating a 3D map from each sensor, I would like to know how to align the two maps in the same world coordinate frame.
I appreciate any help.
Thank you!

Hi @FL399
The best way to align the two sensors is to place a "tag" in the real world with known coordinates (usually the origin), detect it with both sensors, and calculate each sensor's relative pose with respect to it. Then you can use coordinate transform formulas to calculate the transform from the ZED to the LiDAR, or vice versa.
For the ZED you can use, for example, an ArUco tag (see this repository). For the LiDAR you must find a way to detect the tag using only 3D information; this paper can be a good starting point, and in this paper you can find a method developed specifically for the ZED.
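To make the coordinate-transform step concrete, here is a minimal sketch, assuming each detector returns the tag pose as a 4×4 homogeneous matrix in its own sensor frame (the matrices below are placeholders, not output from any real detector):

```python
import numpy as np

# Pose of the tag as seen by each sensor, as 4x4 homogeneous matrices
# (rotation + translation). In practice these come from your ArUco
# detector (ZED) and your 3D tag detector (LiDAR); identity is used
# here only as a placeholder.
T_zed_tag = np.eye(4)    # tag pose expressed in the ZED camera frame
T_lidar_tag = np.eye(4)  # tag pose expressed in the LiDAR frame

# The tag is the same physical object seen by both sensors, so the
# transform that maps points from the LiDAR frame into the ZED frame is:
#   T_zed_lidar = T_zed_tag * inv(T_lidar_tag)
T_zed_lidar = T_zed_tag @ np.linalg.inv(T_lidar_tag)

# Example: a homogeneous point measured in the LiDAR frame,
# re-expressed in the ZED frame.
p_lidar = np.array([1.0, 0.0, 0.0, 1.0])
p_zed = T_zed_lidar @ p_lidar
```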
If you already have the calibration, you only need to align the 3D point clouds generated by the two sensors.
There are two approaches that you can follow:

  1. Align "single" point clouds as soon as they are received from the two devices; this requires good localization so you can first transform both into world coordinates.
  2. Align the two final full maps.

For both approaches I suggest you look at the ICP (Iterative Closest Point) algorithm, available for example in the Open3D and PCL libraries.
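Here is a minimal Open3D sketch of the second approach (aligning the two full maps), assuming each map has been saved to a PCD file; the file names, voxel size, and correspondence distance are placeholders you would tune for your data:

```python
import numpy as np
import open3d as o3d

# Load the two maps (file names are placeholders).
source = o3d.io.read_point_cloud("zed_map.pcd")    # cloud to be aligned
target = o3d.io.read_point_cloud("lidar_map.pcd")  # reference cloud

# Downsample to speed up registration (voxel size in meters).
source_down = source.voxel_down_sample(voxel_size=0.05)
target_down = target.voxel_down_sample(voxel_size=0.05)

# Initial guess: the tag-based ZED-to-LiDAR extrinsics from above,
# or the identity if the clouds are already roughly aligned.
init_transform = np.eye(4)

# Point-to-point ICP; max_correspondence_distance (meters) bounds
# the nearest-neighbor search radius for matching points.
result = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    max_correspondence_distance=0.2,
    init=init_transform,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("Fitness:", result.fitness)        # fraction of matched points
print("Inlier RMSE:", result.inlier_rmse)
print("Transform:\n", result.transformation)

# Apply the refined transform to the full-resolution source cloud.
source.transform(result.transformation)
```

Keep in mind that ICP only converges to the nearest local minimum, so a reasonable initial transform (for example, the tag-based extrinsics) makes a big difference.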


Thank you, Myzhar, for your detailed answer!
