Multiple Operation Multiple Camera


This might not be the correct place to ask, but I would like to ask here.
I have two ZED 2i cameras and I am doing spatial mapping with both. I am planning to run ArUco marker detection and create an algorithm that can report the marker's location to the spatial mapping class.

ArUco marker detection should run at the same time as spatial mapping.
Is it possible to grab the same camera frame? I think the two ZEDs will have separate objects that I can use for both spatial mapping and ArUco detection. I think the Command behavioral pattern fits my case, but I would appreciate your advice.

Hi @davide,

It is the correct place to ask :wink:

I advise looking into our Fusion module. You can do multi-camera spatial mapping and also use the retrieveImage method to get the image from the camera you want to run your ArUco detection on.

The retrieveImage from Fusion is synchronized with Fusion’s output.

I invite you to try our multi-camera spatial mapping sample to get a first grasp of this.

Hi @JPlou ,

Thank you for the answer. Imagine you are doing multi-camera spatial mapping and getting a 3D point cloud in real time, and while this process continues, another thread checks for an ArUco marker. You find the marker and know its location, but that location is relative to the capturing camera's 2D image. Stereolabs has a repository for ArUco marker detection, and it seems to return the marker's 3D position, but how can you know the position of the ArUco marker in the 3D point cloud? We detected it, but we can't directly transfer that relative location information into the spatial mapping space.

Hi @davide,

You can access the pose (in the Fusion reference frame) of the camera you're using to track the ArUco marker via the getPosition() method. From that, you can transform points from the camera's 3D space into Fusion's 3D space.
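As an illustration of that transform step, here is a minimal, self-contained sketch (plain Python, no ZED SDK calls): it assumes the camera pose is available as a 4x4 row-major matrix mapping camera coordinates to Fusion coordinates, and applies it to a 3D point expressed in the camera frame. The matrix layout and the example values are hypothetical, not the SDK's exact data structures.

```python
# Hypothetical sketch: moving a 3D point from a camera's frame into the
# Fusion (world) frame using that camera's pose. The pose is assumed to be
# a 4x4 row-major rigid transform [R | t] from camera to Fusion coordinates.

def transform_point(pose_4x4, point_cam):
    """Apply a 4x4 rigid transform to a 3D point (camera frame -> Fusion frame)."""
    x, y, z = point_cam
    return tuple(
        pose_4x4[r][0] * x + pose_4x4[r][1] * y + pose_4x4[r][2] * z + pose_4x4[r][3]
        for r in range(3)
    )

# Example: a camera translated 1 m along +X relative to the Fusion origin,
# with no rotation, sees a marker 2 m straight ahead.
pose = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
print(transform_point(pose, (0.0, 0.0, 2.0)))  # → (1.0, 0.0, 2.0)
```

In practice you would build `pose` from the rotation and translation returned by the camera's pose query instead of hard-coding it.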

To get the 3D measure from the 2D one, you can read the depth matrix returned by retrieveMeasure at the 2D coordinates that interest you.
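For the 2D-to-3D step itself, here is a minimal pinhole-model sketch, assuming you already have the depth value at the pixel of interest and the camera intrinsics (fx, fy, cx, cy) from calibration. The intrinsic values below are illustrative placeholders, not real ZED calibration data.

```python
# Hypothetical sketch: back-projecting a 2D pixel (u, v) with a known depth
# into 3D camera-frame coordinates using the pinhole camera model.
# fx, fy: focal lengths in pixels; cx, cy: principal point.

def back_project(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (in the camera frame) seen at pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with illustrative intrinsics: a pixel 20 px right of the principal
# point, at 2 m depth, maps to a small positive X offset.
print(back_project(660, 360, 2.0, 700.0, 700.0, 640.0, 360.0))
```

The resulting point is in the camera's frame; combining it with the camera pose from the previous step gives the marker's position in the Fusion point cloud's space.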