Spatial mapping of a small area

In our application, we want to use a ZED2 to map a small area (about 0.8 m x 0.8 m x 0.8 m) with high precision, to avoid collisions of a robotic arm with its workspace. I am experimenting with spatial mapping in the ZED SDK, but the results do not meet our requirements yet. Smaller objects (e.g. a ball with a 6 cm diameter) are missed by the spatial mapping. The retrieved point cloud shows somewhat more detail.

I have played with a lot of parameters. So far, I got the best results when using:

sl::InitParameters p;
p.camera_fps = 60;
p.camera_resolution = sl::RESOLUTION::HD720;
p.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Z_UP;
p.coordinate_units = sl::UNIT::MILLIMETER;
p.depth_mode = sl::DEPTH_MODE::ULTRA;
p.depth_minimum_distance = 200.0f; // expressed in coordinate_units, i.e. 20 cm

and

sl::SpatialMappingParameters smp{sl::SpatialMappingParameters::MAPPING_RESOLUTION::HIGH};
smp.map_type = sl::SpatialMappingParameters::SPATIAL_MAP_TYPE::MESH;
smp.save_texture = true;
smp.resolution_meter = 0.01f; // 1 cm voxel size, overriding the HIGH preset
smp.range_meter = 1.5f;       // integrate depth only up to 1.5 m
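
For context, this is roughly how those parameters plug into the usual ZED SDK (3.x) spatial mapping loop; a minimal sketch only, with error handling omitted, and the frame count and file name are placeholders:

sl::Camera zed;
if (zed.open(p) != sl::ERROR_CODE::SUCCESS)
    return 1;

zed.enablePositionalTracking(sl::PositionalTrackingParameters{}); // required by spatial mapping
zed.enableSpatialMapping(smp);

// Move the camera over the workspace; the map is integrated during grab().
for (int i = 0; i < 500; ++i)
    zed.grab();

sl::Mesh mesh;
zed.extractWholeSpatialMap(mesh); // blocking call: retrieves the fused mesh
mesh.filter(sl::MeshFilterParameters(sl::MeshFilterParameters::MESH_FILTER::MEDIUM));
mesh.applyTexture();              // uses the images kept by save_texture
mesh.save("workspace.obj", sl::MESH_FILE_FORMAT::OBJ);

zed.disableSpatialMapping();
zed.close();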

It seems the lower limit of resolution_meter is 0.01 m. Is there a way to go lower?

Do you have any other suggestions to improve spatial mapping of our small area?

As our ZED2 is mounted on the robot, is there a recommended movement pattern to get the best spatial mapping results? For example, I think the ZED does not like rotations around the lens axis too much; I got more artefacts (I guess caused by drift) when doing so.

Thanks for your support!
Marcel

Hi @marscho
the spatial mapping module is designed to work in larger areas; 0.8 x 0.8 x 0.8 m is a very small workspace.
I suggest you write your own point cloud fusion pipeline based on the iterative closest point (ICP) algorithm, which is well suited for this kind of configuration.
ICP implementations are available in the PCL library, in Open3D, in OpenCV, and in many other open-source projects on GitHub.
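
As a starting point, here is a minimal sketch of such a fusion step using PCL; the function name fuseFrame, the leaf size, and the ICP thresholds are illustrative assumptions, not recommended values:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Register a new frame against the accumulated model and fuse it.
// 'model' and 'frame' would come from the ZED point cloud (e.g. via
// retrieveMeasure) converted to PCL clouds, with units in meters.
void fuseFrame(Cloud::Ptr model, Cloud::Ptr frame)
{
    // Downsample both clouds to a uniform density (2 mm leaf size here).
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setLeafSize(0.002f, 0.002f, 0.002f);

    Cloud::Ptr frame_ds(new Cloud), model_ds(new Cloud);
    grid.setInputCloud(frame); grid.filter(*frame_ds);
    grid.setInputCloud(model); grid.filter(*model_ds);

    // Align the new frame onto the model with ICP.
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(frame_ds);
    icp.setInputTarget(model_ds);
    icp.setMaxCorrespondenceDistance(0.01); // reject matches beyond 1 cm
    icp.setMaximumIterations(50);

    Cloud aligned;
    icp.align(aligned);
    if (icp.hasConverged())
        *model += aligned; // naive fusion: accumulate the aligned points
}

Since your camera is mounted on the robot arm, the arm's known pose could also provide the initial guess for icp.align(), which usually makes the registration much more robust.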