Spatial Mapping: Clarification on range_meter Parameter

I’ve been working on a project involving spatial mapping with the ZED SDK. I have a couple of questions related to optimizing my spatial mapping process.
Depth Range (range_meter parameter):
While I understand the purpose of the range_meter parameter in SpatialMappingParameters, I'm having trouble adjusting it for my application. I've tried setting range_meter to various values (e.g., 0.2, 0.5, 1.5) but haven't seen notable differences in the output. I want to manually specify the range of point cloud data used in model generation. Can you provide more insight into how to use this parameter effectively?

Velocity Considerations:
Is there a way to configure the spatial mapping process to avoid recording data when the camera is moving too fast? My goal is to avoid generating point clouds from data captured at high velocities, which might affect the model's accuracy.

Additionally, I noticed there’s an allowed_range attribute in the SDK documentation. Can this be leveraged to further refine the spatial mapping process?

Thank you for your guidance. I appreciate your support in helping me optimize my project’s spatial mapping procedures.

Hi,

Sorry for the late reply. Are you talking about this range: Spatial Mapping Module | API Reference | Stereolabs?

The range you mention is a depth range: it applies to the depth data, and depth has a strong influence on spatial mapping.

The allowed range is a clamp; anything outside of this range should not be mapped.

Unfortunately, you cannot filter out high-velocity motion. This would break the tracking, and the tracking is actually the weak point of spatial mapping. For that reason, I advise you to always use POSITIONAL_TRACKING_MODE::QUALITY and not to map for too long.
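For reference, here is a minimal sketch of where these settings live, assuming the pyzed Python API (the attribute names below are taken from the SDK documentation and may differ slightly between SDK versions):

```python
import pyzed.sl as sl

# The depth computed by the camera is bounded first by InitParameters;
# nothing outside this interval can ever reach the spatial mapping module.
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_minimum_distance = 0.2   # meters
init_params.depth_maximum_distance = 1.5   # meters

# range_meter is the maximum depth (in meters) that gets integrated into the
# map; the SDK clamps it to the bounds exposed as
# SpatialMappingParameters.allowed_range.
mapping_params = sl.SpatialMappingParameters()
mapping_params.range_meter = 1.5
```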

Thanks for your reply.
By the way, how do I set POSITIONAL_TRACKING_MODE::QUALITY in Python?
Also, since the allowed range is a clamp, how do I set it? For example, if I want to use the range 0.1 m to 1 m, how should I configure it?

The documentation is here:
https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1PositionalTrackingParameters.html#a4e3002f49172c33aad8fff45cc127a19

Basically, you must set it in PositionalTrackingParameters.
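For example, a minimal sketch covering both of your questions, assuming an SDK build whose Python API exposes POSITIONAL_TRACKING_MODE.QUALITY and the attribute names shown below (they may differ between SDK versions):

```python
import pyzed.sl as sl

# Restrict the depth used by the whole pipeline to roughly 0.1 m - 1 m.
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_minimum_distance = 0.1
init_params.depth_maximum_distance = 1.0

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")

# Positional tracking in QUALITY mode, as recommended above.
tracking_params = sl.PositionalTrackingParameters()
tracking_params.mode = sl.POSITIONAL_TRACKING_MODE.QUALITY
zed.enable_positional_tracking(tracking_params)

# Spatial mapping: do not integrate depth beyond 1 m.
mapping_params = sl.SpatialMappingParameters()
mapping_params.range_meter = 1.0
zed.enable_spatial_mapping(mapping_params)
```

Note that the camera itself has a minimum sensing distance, so very small values for depth_minimum_distance may be clamped by the SDK.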

Thanks for your reply.
One more question:
I am currently engaged in a project that involves simulating interactions between robots and plants within a MuJoCo environment for reinforcement learning purposes. To facilitate this, I am considering the integration of real-time octree data, which would be instrumental in defining the state space for the learning algorithm.

I am using the ZED Spatial Mapping Module for generating 3D models and am keen on understanding how I might be able to extract or compute octree representations from these models in real-time. Specifically, I am looking for a solution that would allow me to receive continuous updates to the octree as the environment changes or as the sensor acquires new data.

Could you provide guidance on whether the ZED Spatial Mapping Module supports this feature? If so, I would greatly appreciate any documentation or examples on how to implement this.

Understanding this capability is crucial for the progression of my research, and any assistance you could offer would be greatly appreciated.

Hi,
The spatial mapping module doesn't support this kind of format; however, you could use a third-party library like OctoMap: https://octomap.github.io/
The idea would be to map the environment with the spatial mapping module in fused point cloud mode; then, at each update, you would insert the resulting point cloud (ideally only the newly updated parts, using the chunks) into the OctoMap.
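As a rough sketch of that first approach (names assume the pyzed Python API, including the has_been_updated flag on chunks, and the actual insertion into the octree is left as a placeholder):

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

mapping_params = sl.SpatialMappingParameters()
mapping_params.map_type = sl.SPATIAL_MAP_TYPE.FUSED_POINT_CLOUD
zed.enable_spatial_mapping(mapping_params)

fused_cloud = sl.FusedPointCloud()
runtime = sl.RuntimeParameters()

while zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.request_spatial_map_async()   # ask the SDK to update the map in the background
    if zed.get_spatial_map_request_status_async() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_spatial_map_async(fused_cloud)
        # Each chunk flags whether it changed since the last retrieval, so only
        # the updated regions need to be re-inserted into the octree.
        updated = [c for c in fused_cloud.chunks if c.has_been_updated]
        # ... insert the vertices of `updated` chunks into the OctoMap here ...
```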
Alternatively, you could directly insert the ZED's point cloud (probably heavily downsampled) into the OctoMap, using the positional tracking to place the point cloud in a global "World" coordinate frame. This would be more efficient, but also noisier, since it would be the "raw" point cloud.
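And a rough sketch of that second approach, assuming the unofficial octomap-python bindings (octomap.OcTree / insertPointCloud) and pyzed accessor names such as pose_data().m, which you should double-check against your SDK version:

```python
import numpy as np
import octomap            # unofficial octomap-python bindings (assumption)
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

tree = octomap.OcTree(0.05)        # 5 cm voxels
point_cloud = sl.Mat()
pose = sl.Pose()
runtime = sl.RuntimeParameters()

while zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
    zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)

    # Keep only valid XYZ points and downsample aggressively before insertion.
    xyz = point_cloud.get_data()[:, :, :3].reshape(-1, 3)
    xyz = xyz[np.isfinite(xyz).all(axis=1)][::20]

    # Express the camera-frame points in the world frame using the pose.
    T = pose.pose_data(sl.Transform()).m        # 4x4 homogeneous matrix
    xyz_world = (T[:3, :3] @ xyz.T).T + T[:3, 3]

    origin = np.asarray(pose.get_translation(sl.Translation()).get(), dtype=np.float64)
    tree.insertPointCloud(xyz_world.astype(np.float64), origin)
```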