Hi, I have two ZED 2 cameras in the same room, and both cameras are static (fixed in place, but at different positions and angles). I want to merge the data from both cameras so I get more accurate person detection. First, I want to get world coordinates from both cameras and determine both camera positions relative to the same world frame. Then I want to join the person keypoints detected by both cameras based on those world coordinates. I tried the body tracking / multi-camera sample from GitHub, but it is not working and throws errors.
What errors are you encountering?
What version of the SDK are you using?
sl.read_fusion_configuration_file() throws an error when it tries to read my JSON file, which I created based on the configuration file example in the Fusion documentation. My SDK is 4.0.3 for Jetson Nano.
Did you generate the file using ZED360 or from scratch?
Is it possible that you share the json file? I’m afraid I can’t help you much without taking a look.
If you don’t want to share it on the forum, you can send it, along with a link to this topic, to firstname.lastname@example.org and I’ll take a look.
Hi, when I use ZED360 on the Jetson, it freezes the Jetson after auto-detecting both cameras, or after calibrating. So I never got the configuration JSON file from ZED360. Instead, I wrote a program to create the JSON file based on the format in the documentation.
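For reference, a hand-written calibration file might look roughly like the sketch below. This is only an approximation of the format described in the Fusion documentation; the serial numbers are made up, and the exact field names, the connection type values, and whether the rotation is a rotation vector in radians and the translation in meters are assumptions that should be checked against the sample configuration shipped with the SDK:

```json
{
  "30000000": {
    "input": {
      "zed": { "type": "USB_SERIAL", "configuration": "30000000" },
      "fusion": { "type": "INTRA_PROCESS" }
    },
    "world": {
      "rotation": [0.0, 0.0, 0.0],
      "translation": [0.0, 0.0, 0.0]
    }
  },
  "30000001": {
    "input": {
      "zed": { "type": "USB_SERIAL", "configuration": "30000001" },
      "fusion": { "type": "INTRA_PROCESS" }
    },
    "world": {
      "rotation": [0.02, -1.57, 0.01],
      "translation": [1.2, 0.0, 2.5]
    }
  }
}
```

One camera is conventionally left at the identity pose so it defines the origin of the shared world frame, and the other cameras' poses are expressed relative to it.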
That looks like another problem altogether.
Can you please send these 2 files either to the forum or email@example.com:
- the ZED Diagnostic result for your Jetson; it will include data we can use to investigate why ZED360 does not work in the first place
- the json file you are using, so we can check the formatting
Thanks, I will try to do that soon. Back to the main topic: how do I create a mapping/localization that gives the positions of both cameras in the same map?
Sorry, I don’t have any answer other than “Use ZED360” and “Take a look at how our body tracking Fusion works in the sample we provide”.
ZED360 should create a configuration file giving all cameras’ coordinates in the reference frame of one of them; that’s its purpose. If it doesn’t work, that’s a pretty major issue that we can’t brush off. There are certainly other calibration methods out there. The way ZED360 works is by tracking a person with multiple cameras and using the knowledge that it is a single person, along with the associated data, to determine where the cameras are relative to each other.
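Once each camera’s pose in a common world frame is known (whether from ZED360 or another calibration), joining the detections comes down to transforming each camera’s keypoints into that world frame and merging the matched ones. Below is a minimal pure-Python sketch of that idea; the poses, the single “nose” keypoint, and the averaging step are all made-up illustrations, not ZED SDK API calls:

```python
import math

def make_pose(yaw_rad, translation):
    """Build a 4x4 homogeneous camera-to-world pose from a yaw angle
    (rotation about the vertical axis) and a translation. Both values
    here are hypothetical, standing in for a real calibration result."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [
        [c,   0.0, s,   translation[0]],
        [0.0, 1.0, 0.0, translation[1]],
        [-s,  0.0, c,   translation[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]

def to_world(pose, point):
    """Apply a 4x4 camera-to-world pose to a 3D keypoint expressed in the
    camera frame, returning the keypoint in the shared world frame."""
    x, y, z = point
    return [
        pose[r][0] * x + pose[r][1] * y + pose[r][2] * z + pose[r][3]
        for r in range(3)
    ]

def fuse(points):
    """Naively fuse matching keypoints from several cameras by averaging
    their world-frame positions (a real system would also weight by
    detection confidence and associate skeletons across cameras)."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(3)]

# Hypothetical setup: camera A defines the world origin; camera B sits
# 2 m in front of it and faces back toward it (yaw of 180 degrees).
pose_a = make_pose(0.0, [0.0, 0.0, 0.0])
pose_b = make_pose(math.pi, [0.0, 0.0, 2.0])

# The same nose keypoint, seen 1 m in front of each camera.
nose_a = to_world(pose_a, [0.0, 1.6, 1.0])
nose_b = to_world(pose_b, [0.0, 1.6, 1.0])
print(fuse([nose_a, nose_b]))  # both observations agree at roughly [0, 1.6, 1]
```

The point of the toy numbers is that two cameras observing the same joint from opposite sides land on (nearly) the same world coordinate once their poses are applied, which is exactly what a correct calibration file should give you.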
If you want to implement that on your side, that’s as much information as I have at my disposal. I probably can’t help further on this subject without the files I asked for in my previous messages, sorry.