Multi-camera spatial mapping

Hi,

I am trying to reconstruct a scene with two ZED 2i cameras. I found multi-camera spatial mapping support in the ZED SDK samples. Even though there are no instructions on how to use this app, I used the ZED360 tool to get the camera JSON. I rigidly mounted the two cameras on the two ends of a long stick. I assumed that 6-7 iterations of the calibration process in ZED360 would be enough to produce the calibration file; I then supplied this file to the multi-camera reconstruction app, but it does not work the way mono spatial mapping does. The result is completely dark. Can you check this link?

So my questions are:

What kind of things should I be aware of to get a nice, colored point cloud? (The lighting is sufficient; if it weren't, the cameras could not pick up enough features.)
The mono camera works great.
There is a multi-camera recording sample, but is it enough to record synchronized SVOs and supply the output to the multi-camera spatial mapping app?

Thank you

Hi @davide,

For now, it's not possible to save the textures when using spatial mapping fusion; that's only possible in mono-camera. Thanks for the feedback, it's something we want to do, but it probably won't be short-term.

On the other hand, yes, you can use the multi-camera recording sample to record SVOs, but you will also need the ZED360 configuration file for the cameras to run the spatial mapping fusion sample with the SVOs.

Explanations on how to modify the ZED360 configuration file for SVO can be found in this post: Fusion Sample Code with SVO Files - #2 by JPlou
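
For reference, a modified camera entry in the ZED360 JSON could look roughly like this (the serial number and SVO path below are placeholders; see the linked post for the exact fields):

```json
{
    "12345678": {
        "input": {
            "fusion": {
                "type": "INTRA_PROCESS"
            },
            "zed": {
                "configuration": "/path/to/recording_12345678.svo",
                "type": "SVO_FILE"
            }
        },
        "world": {
            "rotation": [0.0, 0.0, 0.0],
            "translation": [0.0, 0.0, 0.0]
        }
    }
}
```

The idea is that the `zed.type` field switches from the live-camera input to `SVO_FILE`, and `zed.configuration` points to the recording instead of the device.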

Hi @JPlou,

Thank you for your answer. I wonder whether it is possible to run mono spatial mapping on each camera separately, selected by serial number. If the calibration is correct and I get two separate mono spatial mapping outputs, I can use ICP or another alignment algorithm to align the two mapped meshes.
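
For example, I have something like this Open3D sketch in mind (the file names are hypothetical, and in practice I would seed ICP with the ZED360 extrinsics rather than the identity):

```python
import numpy as np
import open3d as o3d

# Load the two mono spatial mapping exports (hypothetical file names)
source = o3d.io.read_point_cloud("map_cam1.ply")
target = o3d.io.read_point_cloud("map_cam2.ply")

# Point-to-point ICP with a 5 cm correspondence threshold,
# starting from the identity transform
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated transform and merge the two clouds
source.transform(result.transformation)
o3d.io.write_point_cloud("merged.ply", source + target)
```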

Do you have any other suggestions? And if I decide to modify the multi-camera spatial mapping sample, is it technically possible to produce a textured mesh with it?

It seems that a user has managed to do it.

My last question is:
If I mount the two cameras on a 5 m long stick and get the calibration file with ZED360 while the stick is motionless, can I use this calibration file when I then move the stick with the two cameras kept in the same relative position? I mean that I will try to move the stick around an object to capture its full 3D structure.

Hi @davide,

To clarify, the mesh you get is black but should be a correct merged mesh from the cameras. The issue is the colors/textures.

You should be able to run multiple spatial mapping samples at the same time, yes. You can merge the resulting meshes/textures using any method of your choice.
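
As a rough sketch of what that could look like with the Python API (the serial numbers and file names below are placeholders, error handling is trimmed, and you would typically run each camera in its own process):

```python
import pyzed.sl as sl

def map_one_camera(serial, out_path, frames=500):
    # Open the camera identified by its serial number
    init = sl.InitParameters()
    init.set_from_serial_number(serial)
    init.coordinate_units = sl.UNIT.METER
    zed = sl.Camera()
    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError(f"Could not open camera {serial}")

    # Positional tracking must be enabled before spatial mapping
    zed.enable_positional_tracking(sl.PositionalTrackingParameters())
    zed.enable_spatial_mapping(sl.SpatialMappingParameters())

    # Grab frames while the rig moves around the object
    runtime = sl.RuntimeParameters()
    for _ in range(frames):
        zed.grab(runtime)

    # Retrieve and save the mesh built by this camera alone
    mesh = sl.Mesh()
    zed.extract_whole_spatial_map(mesh)
    mesh.save(out_path)
    zed.close()

map_one_camera(12345678, "map_cam1.obj")  # placeholder serials
map_one_camera(87654321, "map_cam2.obj")
```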

Can I use this calibration file when I then move the stick with the two cameras kept in the same relative position?

Yes, I would even say that's the way to use it. The spatial mapping benefits from the cameras moving, and the calibration file tells the Fusion where the cameras are relative to one another.

Hi @JPlou

Thank you for your answer. Is it technically possible to add colors/textures to the black mesh result? I am trying to estimate whether coding a colors/textures feature would be worth the labor.

Hey @davide

I think so; the resulting mesh is “just” a .ply file.
I did a quick test and generated one with spatial mapping fusion. I was able to paint the vertices and export the resulting mesh to .ply from Blender, and it appeared colored in Meshlab afterward.

It should be very doable in code, though I don't have specific directions to give for merging the textures.
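
As a minimal illustration, painting the vertices programmatically could look like this Open3D sketch (the uniform color is just to show the .ply carries vertex colors; file names are placeholders):

```python
import open3d as o3d

# Load the black mesh produced by spatial mapping fusion
mesh = o3d.io.read_triangle_mesh("fused_mesh.ply")

# Assign a uniform gray vertex color; a real implementation would
# project each vertex into the camera images to sample its true color
mesh.paint_uniform_color([0.7, 0.7, 0.7])

o3d.io.write_triangle_mesh("fused_mesh_colored.ply", mesh)
```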

Hi again @JPlou,

I've tested again, and I got a good object shape. I measured the acquired object in a third-party point cloud viewer: the length of the object is absolutely correct, but the width and height do not match the real measurements. The camera was looking at the object with a 45-degree pitch; can the pitch affect the height? I don't think so, but I could not work out what affected the height measurement. Shouldn't the object's shape exactly match the acquired point cloud?

Hi @davide,

The quality of the point cloud and the mesh is affected by the environment. The pitch might affect the detection if it prevents the camera from seeing enough points. Having the camera see the object from other angles will help (though I understand the 5 m long stick is probably part of a constraining setup).
If you're able to provide metrics and reproduction steps, I'd be happy to help further.

That would be:

  • SVOs of your object capture
  • ZED360 calibration file of your setup
  • ground truth of the object’s dimensions
  • a ZED Diagnostic result file, which will give us more info on your system

You can send them to support@stereolabs.com if you don't want to post them here.
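
In the meantime, one thing worth checking: if your viewer measures extents along the world axes, a 45-degree pitch in the capture can spread the object's height across two axes. A quick sanity check with Open3D (the file name is a placeholder) is to compare the axis-aligned and oriented bounding boxes:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("object_capture.ply")

# Extent measured along the world axes (sensitive to capture orientation)
aabb = pcd.get_axis_aligned_bounding_box()
print("axis-aligned extent:", aabb.get_extent())

# Extent measured along the object's own principal axes
obb = pcd.get_oriented_bounding_box()
print("oriented extent:", obb.extent)
```

If the oriented extents match the ground truth but the axis-aligned ones don't, the discrepancy is in the measurement frame rather than in the reconstruction.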

Hi @JPlou,

Thank you for your answer. Let me share the OBJ files with you: the spatial mapping and the multi-camera spatial mapping exports. You can see the big difference. I got the calibration file from ZED360. I will share the SVOs as well.