Fusion of 3D Stereo-camera SLAM with 3D LiDAR SLAM

Hi @FL399
The best way to align the two sensors is to place a "tag" in the real world with known coordinates (usually at the origin), detect it with both sensors, and calculate each sensor's pose relative to it. You can then use coordinate-transform formulas to compute the transform from the ZED to the LiDAR, or vice versa.
For the ZED you can use, for example, an ArUco tag (see this repository); for the LiDAR you must find a way to detect it using only 3D information. This paper can be a good starting point, and in this other paper you can find a method designed specifically for the ZED.
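The transform composition above can be sketched as follows. This is a minimal NumPy illustration, not code from any of the linked references: the tag poses are made-up numbers, and the key step is that the ZED-to-LiDAR extrinsic is the pose of the tag in one frame composed with the inverse of its pose in the other.

```python
import numpy as np

def pose(R, t):
    # Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,).
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    # Rotation about the z-axis by angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical measured poses of the same tag, one in each sensor's frame.
T_zed_tag = pose(rot_z(np.deg2rad(30)), np.array([0.5, 0.0, 1.0]))
T_lidar_tag = pose(rot_z(np.deg2rad(-10)), np.array([0.2, 0.1, 1.2]))

# Extrinsic calibration: T_zed_lidar maps points expressed in the LiDAR
# frame into the ZED frame, via the shared tag observation.
T_zed_lidar = T_zed_tag @ np.linalg.inv(T_lidar_tag)

# Sanity check: composing it with the LiDAR's view of the tag must
# reproduce the ZED's view of the tag.
assert np.allclose(T_zed_lidar @ T_lidar_tag, T_zed_tag)
print(T_zed_lidar)
```

Once `T_zed_lidar` is known it stays fixed (the sensors are rigidly mounted), so it can be applied to every LiDAR cloud before fusion.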
Once you have the calibration, you only need to align the 3D point clouds generated by the two sensors.
There are two approaches that you can follow:

  1. align the "single" point clouds as soon as they are received from the two devices; this requires good localization so you can first transform them both into world coordinates
  2. align the final full map

For both approaches I suggest looking at the ICP (Iterative Closest Point) algorithm, available for example in the Open3D and PCL libraries.
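In practice you would call Open3D's or PCL's registration routines directly, but to show what ICP actually does, here is a minimal self-contained point-to-point ICP in plain NumPy (brute-force nearest neighbours, Kabsch alignment). The synthetic clouds and the 5-degree offset are invented for the demo; real sensor clouds need downsampling, outlier rejection, and a decent initial guess from the tag calibration.

```python
import numpy as np

def best_fit_transform(A, B):
    # Least-squares rigid transform (Kabsch) mapping point set A onto B.
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=20):
    # Point-to-point ICP with brute-force nearest-neighbour matching.
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # For each source point, find its closest point in the target cloud.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Best rigid transform for the current correspondences, then apply it.
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: recover a known small rotation + translation between two clouds.
rng = np.random.default_rng(0)
target = rng.uniform(-1.0, 1.0, (200, 3))
theta = np.deg2rad(5)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
source = (target - t_true) @ R_true   # so that target = R_true @ source + t_true

R_est, t_est = icp(source, target)
```

With Open3D the equivalent call is `o3d.pipelines.registration.registration_icp(source, target, max_correspondence_distance, init_transform, ...)`, where passing the tag-based extrinsic as `init_transform` greatly improves convergence.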
