In my project, I'm currently able to stream the ZED 2i camera feed to another device on the same network and build a spatial map from it.
The thing is, since it's a live stream, what I would ideally like is to capture the movement of a person. My idea was to have the spatial mapping visualization display only the last 50 to 100 frames (or some other value, to be tuned after testing).
Currently, the spatial mapping module runs a background thread that retrieves the point cloud data and fuses it into a single mesh. All of the data is accumulated and cannot be removed while processing is running.
However, you can enable and disable the module manually if you only want to map a specific sequence while a person is in view. You could also use the object detection module to trigger the mapping automatically when a person enters the scene, as in the sketch below.
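A minimal sketch of that idea, assuming the Python API (`pyzed`) and that the application opens the camera itself rather than a network stream; the output path `sequence.obj`, the loop structure, and the parameter defaults are illustrative assumptions, not part of the original answer:

```python
import pyzed.sl as sl

# Open the camera and enable positional tracking (required by both
# object detection and spatial mapping).
zed = sl.Camera()
init_params = sl.InitParameters()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_object_detection(sl.ObjectDetectionParameters())

mapping_params = sl.SpatialMappingParameters()  # tune resolution/range as needed
runtime_params = sl.RuntimeParameters()
obj_runtime_params = sl.ObjectDetectionRuntimeParameters()
objects = sl.Objects()
mapping_active = False

while True:
    if zed.grab(runtime_params) != sl.ERROR_CODE.SUCCESS:
        continue

    # Check whether a person is currently detected in the frame.
    zed.retrieve_objects(objects, obj_runtime_params)
    person_in_view = any(obj.label == sl.OBJECT_CLASS.PERSON
                         for obj in objects.object_list)

    if person_in_view and not mapping_active:
        # A person entered the scene: start mapping this sequence.
        zed.enable_spatial_mapping(mapping_params)
        mapping_active = True
    elif not person_in_view and mapping_active:
        # The person left: extract the mesh for this sequence, then stop mapping.
        mesh = sl.Mesh()
        zed.extract_whole_spatial_map(mesh)
        mesh.save("sequence.obj")  # hypothetical output path
        zed.disable_spatial_mapping()
        mapping_active = False
```

Disabling and re-enabling spatial mapping this way starts each sequence from an empty map, so each extracted mesh only contains the geometry fused while the person was in view.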