JPlou:
Hi @misher , welcome to the forums
For reference, the ZED cameras do not use LIDAR, but stereoscopy and AI to map the environment: Depth Sensing Overview | Stereolabs / Spatial Mapping Overview | Stereolabs
You can indeed export a mesh generated by the spatial mapping module and use it to build a VR app. As for capturing a moving 3D scene:

- The Fusion module was released recently; it lets you merge body tracking data from several cameras.
- You could use the point cloud view from one or several cameras, for example in Unity, providing the positions of the cameras so that the point clouds overlap correctly.
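To illustrate the second option, here is a minimal sketch (in plain Python, with made-up poses and points, not the ZED SDK API) of what "providing the positions of the cameras" means in practice: each camera's point cloud lives in that camera's own frame, and applying the camera's rigid pose (rotation + translation) brings every cloud into one shared world frame so they overlap.

```python
def transform_points(points, rotation, translation):
    """Apply a rigid transform (world = R * p + t) to a list of 3D points."""
    out = []
    for x, y, z in points:
        wx = rotation[0][0]*x + rotation[0][1]*y + rotation[0][2]*z + translation[0]
        wy = rotation[1][0]*x + rotation[1][1]*y + rotation[1][2]*z + translation[1]
        wz = rotation[2][0]*x + rotation[2][1]*y + rotation[2][2]*z + translation[2]
        out.append((wx, wy, wz))
    return out

# Camera A sits at the world origin (identity pose) and sees a point 1 m ahead.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cloud_a = transform_points([(0.0, 0.0, 1.0)], identity, [0.0, 0.0, 0.0])

# Camera B is 2 m further along the world z axis, rotated 180 degrees around
# the vertical (y) axis, so it looks back at the same physical point.
rot_180_y = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
cloud_b = transform_points([(0.0, 0.0, 1.0)], rot_180_y, [0.0, 0.0, 2.0])

# Both cameras observed the same physical point; after transforming into the
# world frame, the two clouds coincide at (0, 0, 1).
print(cloud_a[0], cloud_b[0])
```

In a real multi-camera Unity setup you would not write this by hand: parenting each point cloud under a GameObject whose transform matches the camera's calibrated pose achieves the same thing.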
I would need more details about your project, or more specific questions, to answer more accurately, so do not hesitate to share them.
Best,
Jean-Loup
Thank you very much, I will read over the links and get back to you. I am not a programmer, so this is definitely going to take me a while. So basically I'd have to build an app? There isn't a way to load a 3D-scanned environment into a video app?