Hello all,
I am using a ZED2 and considering getting another camera for a Body Tracking application. I was wondering whether it makes a difference if I use a ZED2i, a regular ZED2, or even a ZED X, and whether that would affect the integration and sensor fusion between them.
As far as I saw, the difference between the ZED2 and the ZED2i is that the ZED2i is more suitable for harsh or wet environments and has an additional focal length option. Are there any other differences?
Thank you,
The connections are different:
ZED2 has an integrated USB cable
ZED2i has a USB-C connector, so you can replace the cable if it is damaged (or change its length)
ZED X is GMSL-based (only supported by Jetsons)
Thank you for your answer.
I understand better now. I still want to be clear on an important point: multi-camera fusion and integration. If I buy a ZED2i, will it work and integrate well with my ZED2, especially for a Body Tracking application? Or do I have to buy the same model for all of the cameras to integrate them?
Thank you,
I wanted to say congrats on the new SDK, it’s amazing!
I tried ZED360 and it works well.
I have a question: ZED360 supports the local USB workflow and ZED Hub. In my setup I’m using a sender/receiver stream with multiple Jetsons and one master computer. Will it be possible in the future to use ZED360 to calibrate over the local network, without ZED Hub? Thanks in advance
Yes @peanuts, it is possible to do this; you simply have to update the generated file according to your setup.
Change the serial number of your camera (if needed) and change the ‘zed’ part.
[‘zed’][‘type’] will be ‘STREAM’ and [‘zed’][‘configuration’] will be the IP address, in the format “192.168.X.X:port”.
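For example, with Python’s json module (a minimal sketch; the serial number, address, and the exact nesting of the ‘zed’ block are placeholders, so inspect your generated file first):

import json

CONFIG_PATH = "calibfile_params.json"   # your ZED360-generated file
SERIAL = "12345678"                     # placeholder camera serial number
STREAM_ADDR = "192.168.1.10:30000"      # placeholder sender IP:port

with open(CONFIG_PATH) as f:
    config = json.load(f)

# Switch this camera's input from local USB to a network stream.
# Depending on the SDK version, the 'zed' block may sit under an 'input' key.
entry = config[SERIAL]
zed = entry["input"]["zed"] if "input" in entry else entry["zed"]
zed["type"] = "STREAM"
zed["configuration"] = STREAM_ADDR

with open(CONFIG_PATH, "w") as f:
    json.dump(config, f, indent=4)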
Currently we can generate the calibration file via ZED360 only with cameras connected over USB or through ZED Hub. Since USB bandwidth is limited, I built my own setup (one ZED per Jetson, eight in total), all connected to a switch, and I process everything on the master computer.
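For reference, the sender side on each Jetson is essentially the standard streaming sender (a rough pyzed sketch; the port is a placeholder):

import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("Failed to open the camera")

# Publish the camera over the local network; one sender per Jetson.
stream_params = sl.StreamingParameters()
stream_params.port = 30000  # placeholder port
if zed.enable_streaming(stream_params) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("Failed to start streaming")

while True:
    zed.grab()  # each grabbed frame is forwarded to subscribers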
When launching ZED360 and using “Auto Discover”, are the local network cameras detected? (I can’t test it right now, and I don’t use ZED Hub.) Or can we generate the calibration by another method/script? Thank you for your quick answer.
My answer was not detailed enough; indeed, you can’t detect streaming cameras.
But you can create your own JSON file using this template: calibfile_params.json (588 Bytes)
Hello, I come back to you after some tests. I used your template file and at first sight I receive the streams. However, I encounter several problems (bugs?): sometimes the cameras are not displayed, I think because ZED360 launches the fusion even before the stream is present (a disadvantage of streaming vs. USB; the launch could be made asynchronous). I was able to try with 2 and 3 cameras; when saving the calibration, the file is not updated, and the same happens with a file different from the template.
Example logs of failed camera launches:
INIT FUSION
Setup ZED 22516499
Setup ZED 26461602
22516499 START RUN ZED
Setup ZED 29813646
26461602 START RUN ZED
START RUN FUSION
29813646 START RUN ZED
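A possible workaround when driving Fusion from your own script rather than ZED360 is to retry the subscription until each stream is actually up (a sketch assuming the SDK 4 Fusion API; the file name and retry delay are placeholders):

import time
import pyzed.sl as sl

configs = sl.read_fusion_configuration_file(
    "calibfile_params.json",  # the edited template
    sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP,
    sl.UNIT.METER,
)

fusion = sl.Fusion()
fusion.init(sl.InitFusionParameters())

for conf in configs:
    uuid = sl.CameraIdentifier()
    uuid.serial_number = conf.serial_number
    # Retry instead of failing once if a Jetson stream is not up yet.
    while fusion.subscribe(uuid, conf.communication_parameters, conf.pose) != sl.FUSION_ERROR_CODE.SUCCESS:
        time.sleep(1.0)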
I tried and succeeded in creating a configuration file for 2 ZED X cameras, like the JSON provided. Where should I put it in order for the cameras’ point clouds to adjust accordingly?
I’m working with two ZED2i cameras and performing extrinsic calibration using multiple ArUco markers placed in the environment. I’m using the zed-aruco multi-camera example to estimate camera poses and generate a MultiCamConfig.json file.
The generated JSON file works fine for visualization inside the ArUco localization example.
However, when I use this same file in the ZED Fusion sample from the body tracking repository (ZED SDK Multi-Camera Fusion), the fused point clouds are not aligned (see attached image).
Interestingly, when I generate the configuration using ZED360, the fusion output aligns perfectly.
Details:
I’m using DICT_6X6_100 ArUco markers, each 200 mm in size.
The coordinate system in both the ArUco and fusion codes is set to COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP.
In the fusion config generation, I’ve set override_gravity = true.
Poses are transformed using the detected ArUco markers and exported directly.
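To clarify that last step, the export boils down to the following (a simplified sketch of the math, not the exact zed-aruco code; the names are placeholders):

import numpy as np

def camera_world_pose(t_world_marker: np.ndarray,
                      t_cam_marker: np.ndarray) -> np.ndarray:
    # Both arguments are 4x4 homogeneous transforms: t_cam_marker is the
    # marker pose in the camera frame (from the ArUco detection), and
    # t_world_marker is the marker's known pose in the chosen world frame.
    # The camera's pose in the world frame is then:
    return t_world_marker @ np.linalg.inv(t_cam_marker)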
Issue:
Even though the ArUco-based configuration file structurally matches the ZED360-generated one, the result in Fusion is clearly misaligned (see attached screenshots). It seems like a gravity misalignment or an inconsistent world-frame definition.
Questions:
Are there any extra steps needed to align ArUco-based poses with the gravity frame?
Is there a standard approach for fusing ArUco-based world coordinates with IMU-based gravity alignment?
How can I debug or visualize whether gravity is correctly overridden in Fusion?
Any suggestions or examples would be very helpful.
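One way to sanity-check the gravity frame (a sketch, assuming pyzed and a static camera): read the IMU’s gravity direction in the camera frame and compare it with what the ArUco-derived world rotation predicts; in RIGHT_HANDED_Y_UP, a gravity-aligned world frame should map it to roughly (0, -1, 0).

import numpy as np
import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("Failed to open the camera")

sensors = sl.SensorsData()
zed.get_sensors_data(sensors, sl.TIME_REFERENCE.CURRENT)
acc = np.array(sensors.get_imu_data().get_linear_acceleration())

# While the camera is static, the accelerometer reading is -gravity
# (in the camera frame), so negate and normalize it.
gravity_cam = -acc / np.linalg.norm(acc)

# Placeholder: the 3x3 world rotation of this camera, taken from the
# ArUco-based MultiCamConfig.json. With a gravity-aligned world frame,
# the result should be close to (0, -1, 0).
r_world_cam = np.eye(3)
print("gravity in world frame:", r_world_cam @ gravity_cam)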