I am trying to use two ZED 2i cameras to reconstruct a scene. I found the multi-camera spatial mapping sample in the ZED SDK. Although there are no instructions for this app, I used the ZED360 tool to get the camera JSON. I rigidly mounted the two cameras at the two ends of a long stick. I assumed that 6-7 iterations of the ZED360 calibration process would be enough to produce a calibration file, and I supplied this file to the multi-camera reconstruction app, but it does not work like mono spatial mapping: the output is completely dark. Can you check this link?
So my question is:
What kind of things should I be aware of to get a nice, colored point cloud? (The lighting is sufficient; if it were not, the camera could not extract enough features.)
Mono camera works great.
There is a multi-camera recording sample, but is it enough to record synchronized SVOs and supply that output to the multi-camera spatial mapping app?
For now, it’s not possible to save the textures when using spatial mapping fusion; that is only possible with a mono camera. Thanks for the feedback, it’s something we want to do, but it probably won’t be short-term.
On the other hand, yes, you can use the multi-camera recording sample to record SVOs, but you will also need the ZED360 configuration file for the cameras to run the spatial mapping fusion sample with the SVOs.
Thank you for your answer. I wonder whether it is possible to run mono spatial mapping on each camera separately, selected by serial number. If the calibration is correct and I get two separate mono spatial mapping outputs, I can use ICP or another alignment algorithm to align the two meshes.
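If the ZED360 calibration is trustworthy, ICP may only be needed for refinement: the relative pose between the two cameras can be applied directly to bring one mono map into the other's frame. A minimal pure-Python sketch, where the poses are made-up illustrative values, not values read from a real ZED360 file:

```python
# Rigid 4x4 poses as row-major nested lists; the numbers are illustrative,
# not taken from a real ZED360 calibration file.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    # Inverse of [R t; 0 1] is [R^T  -R^T t; 0 1].
    R_t = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(R_t[i][k] * T[k][3] for k in range(3)) for i in range(3)]
    return [R_t[0] + [t[0]], R_t[1] + [t[1]], R_t[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply_pose(T, p):
    return [sum(T[i][j] * p[j] for j in range(3)) + T[i][3] for i in range(3)]

# Camera 1 at the world origin; camera 2 turned 180 degrees about y and 5 m
# away, i.e. the two ends of the stick facing each other.
T_w1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
T_w2 = [[-1, 0, 0, 5], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

# Relative pose camera2 -> camera1; use it to re-express camera 2's map.
T_12 = mat_mul(invert_rigid(T_w1), T_w2)
p_in_cam2 = [0.0, 0.0, 1.0]          # a point from camera 2's mono mesh
p_in_cam1 = apply_pose(T_12, p_in_cam2)
```

ICP (for example in CloudCompare or Open3D) can then refine any residual misalignment instead of having to solve the whole registration from scratch.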
Do you have any other suggestions? If I decide to modify the multi-camera spatial mapping sample, is it technically possible to produce a mesh?
It seems that a user has already done this:
My last question is :
If I mount the two cameras on a 5 m long stick and generate a calibration file with ZED360 while the stick is motionless, can I use this calibration file when I then move the stick with the two cameras fixed in the same positions? I mean that I will walk around an object with the stick to capture its full 3D structure.
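As long as the mount is rigid, the camera-to-camera extrinsic that ZED360 calibrates does not change when the stick moves, because any rig motion cancels out of the relative pose. A small numeric check of that identity, using made-up poses in pure Python:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    # Inverse of [R t; 0 1] is [R^T  -R^T t; 0 1].
    R_t = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(R_t[i][k] * T[k][3] for k in range(3)) for i in range(3)]
    return [R_t[0] + [t[0]], R_t[1] + [t[1]], R_t[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def pose_y(theta, t):
    # Rotation about the y axis plus a translation, as a 4x4 pose.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s, t[0]], [0, 1, 0, t[1]], [-s, 0, c, t[2]], [0, 0, 0, 1]]

# World poses of the two cameras while the stick is motionless (illustrative).
T_w1 = pose_y(0.0, [0.0, 0.0, 0.0])
T_w2 = pose_y(math.pi, [5.0, 0.0, 0.0])
T_12 = mat_mul(invert_rigid(T_w1), T_w2)   # what ZED360 effectively captures

# Now move the whole rig by an arbitrary motion M and recompute the extrinsic.
M = pose_y(0.7, [1.0, 0.2, -3.0])
T_12_moved = mat_mul(invert_rigid(mat_mul(M, T_w1)), mat_mul(M, T_w2))

# The relative pose is unchanged up to floating-point error:
# inv(M*T1) * (M*T2) = inv(T1) * inv(M) * M * T2 = inv(T1) * T2.
err = max(abs(T_12[i][j] - T_12_moved[i][j])
          for i in range(4) for j in range(4))
```

What the calibration cannot compensate for is the stick flexing while you carry it; a 5 m lever arm turns even a small bend at the mounts into a noticeable offset at the far end.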
I think so; the resulting mesh is “just” a .ply file.
I did a quick test and generated one with spatial mapping fusion. I was able to paint the vertices and export the resulting mesh to .ply from Blender, and it appeared colored in Meshlab afterward.
It should be very doable with code, though I don’t have specific directions to give for merging the textures.
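For reference, the vertex-colored .ply that comes out of that Blender workflow is a simple text format that both Blender and Meshlab read. A minimal sketch of writing one from scratch, using a single made-up colored triangle:

```python
# Minimal ASCII PLY with per-vertex RGB colors. The geometry here is a
# single triangle with made-up coordinates and colors.
vertices = [  # (x, y, z, r, g, b)
    (0.0, 0.0, 0.0, 255, 0, 0),
    (1.0, 0.0, 0.0, 0, 255, 0),
    (0.0, 1.0, 0.0, 0, 0, 255),
]
faces = [(0, 1, 2)]

def write_colored_ply(path, vertices, faces):
    with open(path, "w") as f:
        # Standard PLY header declaring positions, colors, and faces.
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in vertices:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
        for face in faces:
            f.write(f"{len(face)} {' '.join(map(str, face))}\n")

write_colored_ply("colored_mesh.ply", vertices, faces)
```

A merging script could load both mono meshes, transform one into the other's frame, concatenate the vertex and face lists (offsetting the second mesh's indices), and write the result back out in this format.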
I’ve tested again and got a good reconstruction of the object’s shape. I measured the acquired object in a third-party point cloud viewer: the length of the object is absolutely correct, but the width and height do not match the real measurements. The camera was looking at the object at a 45-degree pitch; can pitch affect the height? I don’t think so, but I could not figure out what affected the height measurement. The object’s shape exactly matches the acquired point cloud.
The quality of the point cloud and the mesh is affected by the environment. The pitch might affect the detection if it prevents the camera from seeing enough points. Having the camera see the object from other angles will help (though I understand the 5 m long stick is probably part of a constrained setup).
If you’re able to provide metrics and reproduction steps, I’d be happy to help further.
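One possible explanation for the width/height mismatch, offered as an assumption rather than something confirmed in this thread: if the dimensions are read off an axis-aligned bounding box while the scan is still in a pitched frame, the extent along the pitch axis stays correct, but the other two axes mix the true width and height. A quick numeric illustration:

```python
import math

# A box with length 2 m (x), width 1 m (y), height 0.5 m (z), pitched
# 45 degrees about the x axis, as if scanned by a camera looking down at
# 45 degrees and then measured with an axis-aligned bounding box.
L, W, H = 2.0, 1.0, 0.5
theta = math.radians(45)

corners = [(sx * L / 2, sy * W / 2, sz * H / 2)
           for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

def pitch(p, t):
    # Rotate a point about the x axis by angle t.
    x, y, z = p
    return (x,
            y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))

rotated = [pitch(p, theta) for p in corners]
extent = [max(p[i] for p in rotated) - min(p[i] for p in rotated)
          for i in range(3)]
# extent[0] (along the pitch axis, the "length") is still exactly 2.0, but
# the measured width becomes W*cos + H*sin and the measured height becomes
# W*sin + H*cos: both wrong, while the length is untouched.
```

This matches the reported symptom (length correct, width and height off); measuring in a viewer after leveling the cloud, or fitting an oriented bounding box, would rule this cause in or out.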