Making a 3D video you can move around in

I’ve seen stills where the ZED camera uses LIDAR to match the image it is taking to the environment. I assume I could take one of those and then load it on an Oculus to move around in. But is it possible to combine stills to make a moving video you can move around in? Or perhaps use multiple ZEDs from different angles to catch the movement in the middle?

(I’m a beginner, so I apologize for posting here, but I couldn’t figure out where else to ask.)

Hi @misher, welcome to the forums

For reference, the ZED cameras do not use LIDAR, but stereoscopy and AI to map the environment: Depth Sensing Overview | Stereolabs / Spatial Mapping Overview | Stereolabs
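
If you want to see what that looks like in practice, here is a minimal depth-reading sketch with our Python API (pyzed); the depth mode and the sampled pixel are just example choices:

```python
import pyzed.sl as sl

# Open the camera with depth enabled (NEURAL is the highest-quality mode)
zed = sl.Camera()
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.NEURAL
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("could not open the camera")

depth = sl.Mat()
if zed.grab(sl.RuntimeParameters()) == sl.ERROR_CODE.SUCCESS:
    # The depth map is computed from the stereo pair, no LIDAR involved
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
    err, dist = depth.get_value(640, 360)  # distance (m) at an example pixel
    print(f"Distance at example pixel: {dist:.2f} m")
zed.close()
```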

You can indeed export a mesh generated by the spatial mapping module and use it to build a VR app (a sketch of the export is below).
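
For illustration, here is roughly what the mesh export looks like with the Python API; the frame count and output file name are arbitrary:

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("could not open the camera")

# Spatial mapping needs positional tracking running underneath
zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_spatial_mapping(sl.SpatialMappingParameters())

runtime = sl.RuntimeParameters()
for _ in range(500):  # move the camera around the scene while grabbing
    zed.grab(runtime)

mesh = sl.Mesh()
zed.extract_whole_spatial_map(mesh)  # blocking call, gathers the full map
mesh.save("scan.obj")  # .obj imports directly into Unity/Unreal for a VR app

zed.disable_spatial_mapping()
zed.disable_positional_tracking()
zed.close()
```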

As for capturing a moving 3D scene, I would need more details on your project, or more specific questions, to answer accurately, so do not hesitate if you have some.

Best,
Jean-Loup

Thank you very much, I will read over the links and get back to you. I am not a programmer, so this is definitely going to take me a while. So basically I’d have to build an app? There isn’t a way to load a 3D scanned environment into a video app?

@misher To sum up: you could record SVOs and turn them into textured/colored point clouds (see the point cloud gif on this page). You could then use those in your engine of choice to make a VR app.

More in-depth:

The easiest/quickest way to do what I think you want to do does indeed involve knowing at least a little about how to use a 3D engine.

First, you’d have to calibrate your cameras using ZED360, or just use one camera.

You’d then record the SVOs as synchronously as possible; we provide a sample for this, or you can just use ZED Explorer. A minimal scripted version is sketched below.
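
If you script the recording instead of using ZED Explorer, the loop looks roughly like this (file name, resolution, and duration are placeholders; run one instance per camera):

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.HD720
init.camera_fps = 30
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("could not open the camera")

# One H.264-compressed SVO file per camera
rec = sl.RecordingParameters("cam0.svo", sl.SVO_COMPRESSION_MODE.H264)
if zed.enable_recording(rec) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("could not start recording")

runtime = sl.RuntimeParameters()
frames = 0
while frames < 30 * 60:  # about one minute at 30 fps
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        frames += 1  # each successful grab appends a frame to the SVO

zed.disable_recording()
zed.close()
```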

Our Unity plugin provides a Point Cloud scene example. You could use it to merge several point clouds (actually, display them on top of each other) generated from the previously recorded SVOs. You would have to position your PointCloudManager objects according to the translations and rotations given by ZED360, or adjust them by hand so that the clouds overlap cleanly. I would also use the NEURAL depth mode to get the cleanest depth and point clouds. If you want to inspect a recording outside Unity first, see the sketch below.
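
Here is a minimal replay sketch for checking a recording’s point cloud outside Unity; "cam0.svo" is a placeholder, and applying the ZED360 pose and handing the points to your engine are left as comments:

```python
import pyzed.sl as sl

init = sl.InitParameters()
init.set_from_svo_file("cam0.svo")      # replay the recording instead of a live feed
init.depth_mode = sl.DEPTH_MODE.NEURAL  # cleanest depth, but heavy on the GPU
init.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("could not open the SVO")

cloud = sl.Mat()
runtime = sl.RuntimeParameters()
while zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:  # stops at end of file
    # One XYZ point plus a packed RGBA color per pixel
    zed.retrieve_measure(cloud, sl.MEASURE.XYZRGBA)
    points = cloud.get_data()  # numpy array of shape (height, width, 4)
    # ...transform 'points' by this camera's ZED360 pose and feed your engine
zed.close()
```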

The Unity workflow I just described requires little to no code at all.

However, it is not well optimized in Unity because it’s not a very common use case; displaying several point clouds, especially with NEURAL depth, is quite heavy on the computer. But that could be the way to your 3D videos.

Best,
Jean-Loup

Much appreciated. Lots to learn. I’ll probably be back in a month or two to ask questions.