I’m using the ZED SDK’s Python API to extract the point cloud measure, depth map, and RGB image. The point cloud, however, has many holes that are not present in the depth image.
The point cloud in the Depth Viewer also looks much better.
Furthermore, when I manually create a point cloud by iterating over all pixels and unprojecting each one to a 3D point using the depth map and the camera intrinsics, the resulting point cloud again contains fewer void regions.
Below, on the left, you can see the output of the retrieve_measure method; on the right, the point cloud I created myself by iterating over all pixels in the depth map.
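For reference, the manual unprojection described above can be sketched with NumPy and a standard pinhole model. This is my own minimal version, not ZED SDK code: the function name is hypothetical, and it assumes the depth map is in meters with invalid pixels stored as NaN, inf, or zero.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Unproject a depth map into an Nx3 point cloud using the pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.

    Hypothetical helper, not part of the ZED SDK; assumes depth in
    meters with invalid pixels encoded as NaN/inf/zero."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    # Drop invalid pixels instead of keeping them as holes
    valid = np.isfinite(pts[:, 2]) & (pts[:, 2] > 0)
    return pts[valid]
```

With this approach, every pixel that has a finite, positive depth value ends up in the cloud, which may explain why it looks denser than the SDK's filtered output.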
Are these the same settings that were set in the Depth Viewer?
Also, what happens if you increase the resolution?
Yes, the settings are the same as in the Depth Viewer, and the problem persists when the resolution is changed.
Could you elaborate on how the ZED SDK's point cloud is created? When I manually build a point cloud by iterating over all pixels of the depth map, it looks better than the one I get from the SDK, which I find very strange.
Well, you can either display a point cloud retrieved by the SDK, in which case the result depends heavily on the confidence thresholds, or you can retrieve a fused point cloud if your environment is static; those are generated by the spatial mapping module. We provide examples here: zed-examples/spatial mapping/basic at master · stereolabs/zed-examples · GitHub
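To illustrate the confidence-threshold point: a sketch of how thresholding a confidence map carves holes into a point cloud. This is an illustrative stand-in for the SDK's internal filtering, not its actual implementation, and it assumes the recent ZED SDK convention that lower confidence-map values mean more reliable depth (so a runtime threshold of 100 keeps everything):

```python
import numpy as np

def filter_by_confidence(points, confidence, threshold):
    """Keep only the points whose per-pixel confidence value is at or
    below the threshold.

    Assumption (not verified against SDK source): lower confidence
    values = more reliable depth, threshold 100 = keep all points."""
    mask = confidence <= threshold
    return points[mask]
```

A strict (low) threshold keeps only the most reliable pixels and so produces a sparser cloud with visible holes, while the manual unprojection of every depth pixel keeps them all. Raising the confidence threshold in the runtime parameters should make the SDK's point cloud denser at the cost of noisier points.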