I would like to ask several questions regarding the new camera fusion API.
I plan to use 3 or 4 cameras to capture human body data. Can this API help me fuse the generated point clouds, meshes, and 3D keypoints for the human body?
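For context, here is roughly the receiver-side usage I have in mind, pieced together from my reading of the Fusion documentation. The class and method names (`sl.Fusion`, `subscribe`, `retrieve_bodies`, etc.) are my assumptions rather than verified code, and the serial numbers are placeholders:

```python
import pyzed.sl as sl

# Rough sketch of the fusion receiver, based on my reading of the Fusion docs
# (class/method names are assumptions; serial numbers are placeholders).
fusion = sl.Fusion()
fusion.init(sl.InitFusionParameters())
fusion.enable_body_tracking(sl.BodyTrackingFusionParameters())

# Each camera is assumed to be opened and publishing its detections elsewhere.
for serial in [10000001, 10000002, 10000003]:
    fusion.subscribe(sl.CameraIdentifier(serial),
                     sl.CommunicationParameters(), sl.Transform())

bodies = sl.Bodies()
while True:
    if fusion.process() == sl.FUSION_ERROR_CODE.SUCCESS:
        # Fused skeletons, expressed in one common world frame.
        fusion.retrieve_bodies(bodies, sl.BodyTrackingFusionRuntimeParameters())
```

Is this the intended workflow for multi-camera body data?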
If I plan to attach multiple cameras to one server, how many ZED cameras can I use at the same time? Is an RTX 3080 GPU enough to handle 3 or 4 ZED 2i cameras?
It seems that the BODY70 API has been removed in the new ZED SDK. If I choose to use another project, such as MinimalHand (CVPR 2020), to extract hand keypoints, it will only give me 2D keypoints. How can I leverage the depth information to lift the extracted 2D keypoints to 3D?
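To make the question concrete, the kind of lifting I have in mind is to look up each detected pixel in the camera's XYZ measure, so the 2D keypoint inherits a metric 3D position. `retrieve_measure` and `get_value` are the calls I found in the depth sensing sample; the 2D keypoints themselves would come from an external detector such as MinimalHand (not part of the ZED SDK). A minimal sketch:

```python
import pyzed.sl as sl
import numpy as np

def lift_keypoints_to_3d(zed, keypoints_2d):
    """Back-project 2D pixel keypoints to 3D using the ZED point cloud.

    keypoints_2d: iterable of (u, v) pixel coordinates from an external
    detector such as MinimalHand.
    """
    point_cloud = sl.Mat()
    # XYZRGBA: each pixel stores its 3D position in the camera frame.
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)

    keypoints_3d = []
    for u, v in keypoints_2d:
        err, point = point_cloud.get_value(int(u), int(v))
        if err == sl.ERROR_CODE.SUCCESS and np.isfinite(point[2]):
            keypoints_3d.append(point[:3])
        else:
            keypoints_3d.append(None)  # depth hole / invalid measurement
    return keypoints_3d
```

Is this per-pixel lookup the recommended way, or is there a better-supported path in the SDK?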
This question is about the ZED SDK as a whole: regarding running speed, will the C++ version be faster than the Python version?
Thank you for your reply. Regarding the fusion of skeleton keypoints: if I use other keypoint detection algorithms, such as Google's MediaPipe or MinimalHand, instead of the ZED SDK's body tracking, will the ZED Fusion API still be able to fuse the data? Moreover, when will the BODY70 API come back?
Another question: if I have multiple Jetson AI boards (such as the Jetson NX), I should be able to set up the "network" fusion configuration described in Fusion | Stereolabs. Is that correct? In that case, does the connection between the Jetsons and the server necessarily need to be wired, or is it feasible to connect over WiFi?
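For reference, this is roughly what I expect each Jetson sender to run, based on the network setup described on that page. The names `set_for_local_network` and `start_publishing` are my assumptions from the docs, and the port number is an arbitrary example:

```python
import pyzed.sl as sl

# Sketch of the sender side on one Jetson (API names assumed from the
# Fusion documentation, not verified).
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")

# Body tracking must run locally before skeletons can be published.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_body_tracking(sl.BodyTrackingParameters())

# Publish detections over the LAN; 30000 is an arbitrary example port.
comm = sl.CommunicationParameters()
comm.set_for_local_network(30000)
zed.start_publishing(comm)

bodies = sl.Bodies()
while True:
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        # Retrieving bodies on the sender keeps the published stream fed.
        zed.retrieve_bodies(bodies)
```

If this is the right shape, my question is only about the transport: wired Ethernet vs. WiFi.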
Thank you for your reply. Regarding fusing keypoints obtained from other algorithms, could you give me some hints on how to implement it, for example, pointers to papers or open-source implementations?
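To clarify what I am after: the classic approach I know of is DLT triangulation of the same keypoint across calibrated views, something like the sketch below. The projection matrices would come from whatever extrinsic calibration the multi-camera rig uses; nothing here is ZED-specific:

```python
import numpy as np

def triangulate_keypoint(proj_mats, points_2d):
    """Triangulate one keypoint observed in N calibrated views via DLT.

    proj_mats: list of 3x4 projection matrices P_i = K_i [R_i | t_i]
               from the multi-camera calibration.
    points_2d: list of (u, v) pixel observations of the same keypoint.
    """
    # Each view contributes two linear constraint rows on the homogeneous 3D point.
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)

    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

For two views, OpenCV's `cv2.triangulatePoints` implements the same idea. Is this the kind of per-keypoint fusion you would recommend, or does the Fusion API expose something more robust (e.g., confidence-weighted merging)?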