I have 3 ZED cameras and generated a yml with ZED360. The following was done with SVOs recorded from them in a static setup / scene.
I can generate a mesh/point cloud from each of those cameras. I can fuse them without any errors. I can run the fusion and receive a result, but the result only contains the points of one of the cameras. Everything finishes successfully.
I call the spatial mapping functions on the fusion, not on the individual cameras, but I also tried it the other way around. As I said, everything appears to be fine: the subscription, tracking, extraction… I tried both the whole mesh and chunks only.
The code is in C++ / Qt and very close to the multi-camera spatial mapping example but without the viewer, just saving the mesh/es.
I also tried lower frame rates and many other things. Any idea what might be wrong?
I'm using the latest 4.0.8. I don't know why I didn't try the sample; sometimes I'm an idiot. But now I did, and the behavior is unbelievable: it prints the following and then just sits there at 75% GPU load doing nothing… The OpenGL window is grey…
Try to open ZED 31860539… ready !
Try to open ZED 43568349… ready !
Try to open ZED 46721650… ready !
Starting SVO sync process…
Found SVOs common starting time: 1705354197363440000
Setting camera 0 to frame 530
Setting camera 1 to frame 0
Setting camera 2 to frame 61
Must be 5 minutes now. And those meshes are not very large.
I tested a lot of your samples without any problems so far. \o/
BTW: in my own test I did not use the SVO sync since it is a static scene. Is it absolutely required?
Try to open ZED 31860539… ready !
Try to open ZED 43568349… ready !
Try to open ZED 46721650… ready !
Starting SVO sync process…
Found SVOs common starting time: 1705923989144323000
Setting camera 0 to frame 545
Setting camera 1 to frame 0
Setting camera 2 to frame 66
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
CUDA error at /builds/sl/ZEDKit/lib/src/sl_core/utils/util.cu:459 code=4(cudaErrorCudartUnloading) "cudaCreateTextureObject(tex, &resDesc, &texDesc, NULL)"
Segmentation fault (core dumped)
This machine has 64 GB RAM but only a GTX 1060… I did not see any of this when running my own code, just that the mesh came from only one of the cameras. OK, you can't debug this for me; I just wanted to know if there is some known behavior at work here.
I did the same on the Jetson Orin, which runs CUDA 11.4. The other machine is a PC running Ubuntu 20.04. Same result on the Orin, even though it was set up by another person, with a fresh compile, etc. That's where the cameras are, so I'll go run it live and see if that works.
The segfault happens when I close the OGL viewer window.
Well, I tried it with the real cameras… same deal. I see the mesh getting larger in the viewer, but even if I close it after just a few images it still crashes… on both machines: on the Orin with real cams and with the SVOs, on the PC with different SVOs. Dunno…
Sorry for the delay.
Would you be able to send us:
- your code (or confirm you're using our raw spatial mapping multi-camera sample)
- the SVOs and config JSON file reproducing the issue
You can send them here or through support@stereolabs.com, referencing this post in your mail.
We'll focus on fixing the segfault on the Orin, because the issues on the GTX 1060 seem to come from the 1060 not being able to handle the fusion.
The files are too large to email, so please download them from our server. I'll send the links and login via email, subject "JPlou Support". I can confirm that I used the C++ demo code for the tests: just cmake … make, and run with the calibration JSON from ZED360. It would be great if you could have a look. Thank you!
Any news? We really need that fusion to work. I know 4.0.8 is "early access", but I would still be grateful for any hint on how we can get a fused point cloud out of our setup. Thank you!
Sorry for the delay, I could not take a look before today, but I’ll get back to you shortly.
After taking a look at the SVOs, though, I can say two things:
Our spatial mapping works best when the cameras are moving, as it makes full use of the cameras' positional tracking to improve accuracy.
Along the same lines, it's not intended for large overlap between the cameras, especially if they don't move.
That does not solve the crash, though. To be sure, how do you close the sample? Using “q” or another way?
Yeah, spatial mapping was not our first choice. What we need is a 3D representation of a static scene, but from all sides. Basically we want to measure something sitting in the middle with maximum precision, and we would like to have colors if possible. Your Fusion made us hope that we could avoid handling all the 3D calibration ourselves. So if there is a recommended way to do this, please tell :-).
You are right: stopping the demo with 'q' does not crash. The point cloud is of course just black, so it is very hard to tell whether it comes from all 3 cameras. I'll test and report later. Any hint on how to do this better would be highly appreciated.
Sorry for the confusion, that’s indeed one of the samples where it’s important.
Unfortunately, the fusion of spatial mapping does not give colors for now.
The recommended way to capture a static scene would be to use spatial mapping with a single camera and move it around the scene. That does require moving the camera, though.