Is it possible to fuse data from pre-recorded sources? Suppose I have calibrated multiple cameras, saved the calibration file, and then recorded videos on these cameras. Later on, can I use the calibration file with the pre-recorded videos to generate fused data?
Hi @felixshing,
Yes, using the Fusion module with SVOs along with the corresponding configuration file from ZED360 is possible. I went a little more in-depth here: Fusion Sample Code with SVO Files.
The process is the same as what you would do for live cameras; you just need to modify the configuration file.
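To make "modify the configuration file" concrete, here is a minimal sketch of switching each camera entry from a live input to its recorded SVO. The field names (`input`, `zed`, `type`, `configuration`, `world`) and the serial number are assumptions based on typical ZED360 exports, not an authoritative schema, so compare against a file your own ZED360 produced:

```python
import json

# Illustrative ZED360-style fusion configuration for one camera.
# Field names and values are assumptions -- check them against a
# configuration file exported by your own ZED360 installation.
config = {
    "12345678": {                      # example camera serial number
        "input": {
            "zed": {
                "type": "USB_SERIAL",  # live-camera entry
                "configuration": "12345678"
            }
        },
        "world": {
            "rotation": [0.0, 0.0, 0.0],
            "translation": [0.0, 1.5, 0.0]
        }
    }
}

# Point each camera entry at its recorded SVO instead of the live feed;
# the calibration ("world") block stays untouched.
for serial, cam in config.items():
    cam["input"]["zed"]["type"] = "SVO_FILE"
    cam["input"]["zed"]["configuration"] = f"/data/recordings/{serial}.svo"

print(json.dumps(config, indent=2))
```

The path under `/data/recordings/` is a placeholder; the point is that only the `input` block changes between live and SVO playback.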
Thank you for your reply. Regarding the configuration file: we recorded the videos several months ago, before the 4.0 SDK was released. So we just manually measured the relative positions and orientations (extrinsics) between the cameras. How can I manually set up the configuration file based on our measured results?
I’m afraid we don’t have an automatic solution for older multi-camera SVOs. If your SVOs overlap and include a single person walking alone for about a minute, you can use ZED360 to recalibrate from the recordings.
In any other case (and to answer your actual question now that I read it correctly), you will need to edit the configuration file manually.
Its structure is detailed in the post I linked and the doc here: Fusion | Stereolabs
One camera is always the reference point: its coordinates are all 0 (rotation and translation) except its height, which is its actual height above the ground (so the origin is the ground directly under that camera). The other cameras are positioned and oriented in the reference frame of this first camera.
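As a sketch of that convention, here is how hand-measured extrinsics for a two-camera rig might be turned into the `world` blocks of the configuration file. The serial numbers are placeholders, and I am assuming meters, a Y-up frame, and rotation stored as a rotation vector in radians; verify all three against the coordinate system and units your Fusion initialization actually uses:

```python
import math

# Hand-measured rig (all values are examples):
CAM_A_HEIGHT = 1.50                    # reference camera height above ground (m)
CAM_B_TRANSLATION = (2.0, 1.40, 0.5)   # camera B offset from camera A (m)
CAM_B_YAW_DEG = -35.0                  # camera B rotation about the vertical axis

world_poses = {
    # Reference camera: all zeros except its height above the ground,
    # so the world origin sits on the ground directly below it.
    "CAM_A_SERIAL": {
        "rotation": [0.0, 0.0, 0.0],
        "translation": [0.0, CAM_A_HEIGHT, 0.0],
    },
    # Second camera, expressed in the reference camera's frame.
    # Assuming rotation is a rotation vector in radians, a pure yaw
    # around the vertical (Y) axis becomes (0, yaw_rad, 0).
    "CAM_B_SERIAL": {
        "rotation": [0.0, math.radians(CAM_B_YAW_DEG), 0.0],
        "translation": list(CAM_B_TRANSLATION),
    },
}
```

These dictionaries would then replace the `world` entries that ZED360 normally computes for you.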
You should be able to put your measured results in a configuration file based on this. Then, I imagine you could iterate using a multi-camera body-tracking sample to refine your results.
Do not hesitate to ask if you have more questions, but this could (I assume will) prove tricky. Please update with your results!