We use a segmentation network followed by contour detection on the left RGB image of the ZED 2i.
However, the contours of the detected object no longer carry the correct depth in the depth map, so the plan was to compute the corresponding point-cloud points myself from the pixel coordinates in the RGB image plus a corrected depth value, using the intrinsic matrix of the left RGB camera. To validate this method against the ZED 2i's own point-cloud computation, I also applied it to pixels whose depth-map values were "correct". Even then, my results are not as accurate as the points from the internal ZED 2i point cloud.
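For reference, here is the back-projection I am describing, as a minimal sketch using the standard pinhole model. The intrinsics fx, fy, cx, cy are assumed to come from calibration_params.left_cam; the function name and example numbers are made up for illustration:

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into the left camera frame.

    Standard pinhole model: the depth z is taken as the Z coordinate
    (perpendicular distance to the image plane), not the Euclidean
    distance from the camera center.
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Example with made-up intrinsics: the principal point maps onto the optical axis.
print(backproject(640.0, 360.0, 2.0, 700.0, 700.0, 640.0, 360.0))  # → (0.0, 0.0, 2.0)
```

One thing I am unsure about: whether the depth value I feed in should be interpreted as the Z coordinate (as above) or as the Euclidean distance to the point, since mixing the two conventions would produce exactly the kind of small systematic error I am seeing.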
The values in calibration_params.left_cam.disto are all 0…
Does the calculation have to go through the disparity map instead? Thank you for any suggestions.