Fused Point Clouds

Hello all,

I’ve noticed that the ZED SDK 4.0 includes support for a fused point cloud. For my project, I’d like to set up 2-6 ZED cameras around the same location and fuse their overlapping point clouds into a single point cloud (IN REAL TIME, NOT PRE-RECORDED) so that VR users can view the scene immersively as it happens. A few questions:

  1. Is it possible to fuse multiple point clouds from different ZED cameras into a single point cloud in Unity in real time (not pre-recorded)? I’ve seen documentation on using multiple cameras for body tracking, but not for point clouds.

  2. Is it possible to have VR users walk around the point cloud scene? I’d like to enable multiplayer so multiple players can walk around the point cloud and talk to one another.

Thanks for any help in advance!

Hi @makami
Welcome to the Stereolabs community.

The Fusion module of the ZED SDK v4 actually provides Body Tracking fusion, but the Fused Point Cloud module is not yet ready to be publicly released.

The SDK team is working on this feature, and it will probably be released after the summer.

Hey @makami,

Your post reminds me of this one from @misher, where I outlined some leads toward a workaround for what you want. It’s very much not a definitive solution, but basically you can display several point cloud renderers on top of each other. If you position them correctly using the data from ZED360’s resulting configuration file, you can get a kind of hacky prototype.
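To make that a bit more concrete, here is a minimal sketch (Python, pyzed) of the first step: reading the calibration file that ZED360 exports and extracting each camera’s world pose. In Unity you would then apply the same position/rotation to the GameObject carrying that camera’s point cloud renderer. The file name is a placeholder, and I’m assuming the `read_fusion_configuration_file` helper used by the SDK’s Fusion samples.

```python
# Minimal sketch, assuming the ZED SDK v4 Python API (pyzed) and a calibration
# file exported by ZED360 ("zed360_calib.json" is a placeholder path).
import pyzed.sl as sl

configs = sl.read_fusion_configuration_file(
    "zed360_calib.json",
    sl.COORDINATE_SYSTEM.LEFT_HANDED_Y_UP,  # Unity's convention
    sl.UNIT.METER,
)

for conf in configs:
    pose = conf.pose  # sl.Transform: this camera's pose in the shared world frame
    t = pose.get_translation().get()   # (x, y, z) position to give the renderer
    q = pose.get_orientation().get()   # (x, y, z, w) rotation to give the renderer
    print(conf.serial_number, t, q)
```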

How you would have VR users walk around the scene is another matter entirely, however, and is pretty much independent of the ZED SDK.

Best,
Jean-Loup


Yeah, I’m working on a similar project where a fused point cloud would be super useful.

I want to feed the image data from three ZED cameras into a fine-tuned YOLO model to detect an object from three different angles (in a 6 m x 7 m room), send the resulting custom boxes to the ZED SDK, and have a fused point cloud or depth map from the three cameras give me the object’s XYZ coordinates in world space, which I can then feed into Unreal Engine.
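For the single-camera part of that (YOLO box in, world XYZ out), something along these lines might work. It is only a sketch of the ZED SDK’s custom box object detection path in Python (pyzed); `run_yolo` is a hypothetical placeholder for the fine-tuned model, and fusing the three cameras’ results is not handled here.

```python
import numpy as np
import pyzed.sl as sl

# One camera's half of the pipeline: ingest YOLO boxes, read back 3D positions.
zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the camera")

# Positional tracking is required by the object detection module.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

det_params = sl.ObjectDetectionParameters()
det_params.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
det_params.enable_tracking = True
zed.enable_object_detection(det_params)

image = sl.Mat()
objects = sl.Objects()
rt_params = sl.ObjectDetectionRuntimeParameters()

while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)

    # run_yolo() is a stand-in for the fine-tuned model; it should return
    # (x_min, y_min, x_max, y_max, confidence, class_id) tuples on the left image.
    ingest = []
    for x0, y0, x1, y1, conf, cls in run_yolo(image.get_data()):
        box = sl.CustomBoxObjectData()
        box.unique_object_id = sl.generate_unique_id()
        box.probability = conf
        box.label = cls
        box.is_grounded = False  # free-moving object, not attached to the floor
        box.bounding_box_2d = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]])
        ingest.append(box)

    zed.ingest_custom_box_objects(ingest)
    zed.retrieve_objects(objects, rt_params)
    for obj in objects.object_list:
        print(obj.id, obj.position)  # XYZ (in meters) to forward to Unreal Engine
```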

I basically want a physical object (not a human) to trigger an action within an Unreal Engine scene according to the object’s movement in the physical space.

I’ll probably try to hack something together using the Open3D library for this project.
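In case it helps anyone, a naive version of that hack could look like the following: grab the XYZ measure from each camera, transform it into the shared world frame, and concatenate the clouds in Open3D. It assumes the cameras are already opened with pyzed and that `poses` holds each camera’s 4x4 world transform (e.g. built from the ZED360 calibration), so treat it as a starting point rather than a tested solution.

```python
import numpy as np
import open3d as o3d
import pyzed.sl as sl

def grab_xyz(zed):
    """Grab one frame and return an Nx3 array of valid 3D points (camera frame)."""
    mat = sl.Mat()
    if zed.grab() != sl.ERROR_CODE.SUCCESS:
        return np.empty((0, 3))
    zed.retrieve_measure(mat, sl.MEASURE.XYZ)
    xyz = mat.get_data()[:, :, :3].reshape(-1, 3)
    return xyz[np.isfinite(xyz).all(axis=1)]  # drop NaN/inf (no-depth) pixels

def fuse(cameras, poses):
    """Merge per-camera clouds into one cloud expressed in the shared world frame."""
    merged = o3d.geometry.PointCloud()
    for zed, T in zip(cameras, poses):  # T: 4x4 NumPy world pose of that camera
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(grab_xyz(zed)))
        pcd.transform(T)
        merged += pcd
    return merged.voxel_down_sample(voxel_size=0.02)  # thin out overlapping regions
```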

Where will you announce the release of this feature? Twitter? Blog Post? … I want to make sure I don’t miss it.

Hi,

It will be announced on Twitter and in the Release Notes at the very least (maybe also in the form of a blog post).


Me too. Fusing point clouds and 3D meshes from multiple cameras would also be super helpful for my project.