I need to operate a ZED2i underwater in order to reconstruct a mesh of the seafloor.
To do so, we have built a waterproof case and performed underwater calibration. The calibration steps are:
- Put the camera and a waterproof checkerboard in the water and acquire images from different angles/distances.
- Extract unrectified image pairs of the checkerboard, turning off self-calibration.
- Perform stereo calibration with OpenCV according to https://www.stereolabs.com/docs/opencv/calibration/.
The calibration seems okay: the undistorted images look correct and depth sensing improved.
I have conducted a few tests in shallow water, measuring objects at distances of less than 3 meters.
However, spatial mapping gives poor results: when I run the "spatial mapping" sample on a recorded SVO, the positional tracking state is "SEARCHING" most of the time, and the final mesh does not resemble the real object at all.
I suspect this is due to the challenging scene, which is largely featureless sand with light reflections.
However, I want to check if I’m missing something.
I can share both the SVO and the calibration data if needed.
Any tips on how to improve the results would be appreciated!