Incorrect initial camera position when displaying a point cloud captured by a ZED 2i in MeshLab

Hello, I found that when I display a point cloud captured by the ZED 2i in MeshLab, the initial camera position is far away from the correct position, and each time I have to drag the point cloud around for a long time to bring it into place. For example, this is the initial scene when displaying the point cloud in MeshLab:


And this is the correct position:

How can I make the initial camera position correct in MeshLab?

Hi,

MeshLab simply computes the center of the point cloud and places the camera in front of it (it also adjusts the scale).
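So if stray points pull that center away from your subject, the auto-framing will land in the wrong place. As a workaround you can clean and recenter a saved cloud before opening it in MeshLab; here is a minimal Open3D sketch (the file names and filter parameters are just placeholders):

import open3d as o3d

# Load one of the saved clouds (placeholder file name).
pcd = o3d.io.read_point_cloud('pcl-f00000.ply')
# Drop sparse stray points so they no longer bias the cloud's center.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
# Optionally shift the cloud so its center sits at the origin.
pcd.translate(-pcd.get_center())
o3d.io.write_point_cloud('pcl-f00000-clean.ply', pcd)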

Thank you for your reply. Then how can I let MeshLab know the correct position of the camera? My camera is fixed, so its extrinsic should be the identity (zero rotation and translation, or near it). I used to run the following code to generate point clouds of a human body from an RGB-D sensor that rotates around the person, and the clouds generated by this code have the correct initial camera position in MeshLab. But I don't know how to do the same with the ZED SDK. Could you help me with this?

Moreover, I would like to ask why the point cloud generated by the ZED has a long, sparse "shadow". It should not be there, and it makes the center of the point cloud deviate badly from the correct position.

import glob
import os

import cv2
import numpy as np
import open3d as o3d
from PIL import Image
from tqdm import tqdm


def generate_pcl(data_root):
    print('Generating colored point clouds...')
    print('Loading {}'.format(data_root))
    sub_folder_list = sorted(os.listdir(data_root))
    for sub_folder in sub_folder_list:
        if not os.path.isdir(os.path.join(data_root, sub_folder)):
            continue
        print('Processing sequence: {}'.format(sub_folder))
        render_image_list = sorted(
            glob.glob(
                os.path.join(data_root, sub_folder, 'render', 'image',
                             '*.png')))
        render_depth_list = sorted(
            glob.glob(
                os.path.join(data_root, sub_folder, 'render', 'depth',
                             '*.tiff')))
        cam_infos = np.load(
            os.path.join(data_root, sub_folder, 'render', 'cameras.npz'))

        data_dir = os.path.join(data_root, sub_folder, 'pcl')
        os.makedirs(data_dir, exist_ok=True)
        for i in tqdm(range(len(render_image_list))):
            cam = o3d.camera.PinholeCameraIntrinsic()
            cam.intrinsic_matrix = cam_infos['intrinsic']
            # World-to-camera extrinsic for this frame.
            cam_ext = cam_infos['extrinsic'][i]

            rgb_raw = cv2.imread(render_image_list[i])
            rgb_raw = cv2.cvtColor(rgb_raw, cv2.COLOR_BGR2RGB)
            depth_np = np.array(Image.open(render_depth_list[i]))
            # Zero out readings beyond the working range (units as stored
            # in the TIFF) so they can be filtered out below.
            depth_np[depth_np > 10] = 0
            depth_raw = o3d.geometry.Image(depth_np)
            # Back-project the full depth map; with
            # project_valid_depth_only=False the point order keeps the
            # image layout, and invalid pixels come back as NaN.
            full_pcd = o3d.geometry.PointCloud.create_from_depth_image(
                depth_raw,
                intrinsic=cam,
                extrinsic=cam_ext,
                depth_scale=1000,
                project_valid_depth_only=False)

            temp = np.nan_to_num(np.asarray(full_pcd.points))
            # Restore the H x W x 3 layout so 3D points can be matched
            # with their source pixels.
            temp = temp.reshape((depth_np.shape[0], depth_np.shape[1], 3))

            # Keep only pixels that produced a non-zero 3D point.
            valid_2d_pixels = np.argwhere(np.any(temp, axis=2))

            points = temp[valid_2d_pixels[:, 0], valid_2d_pixels[:, 1], :]
            colors = rgb_raw[valid_2d_pixels[:, 0],
                             valid_2d_pixels[:, 1], :] / 255.0

            pcd = o3d.geometry.PointCloud()
            pcd.points = o3d.utility.Vector3dVector(points)
            pcd.colors = o3d.utility.Vector3dVector(colors)
            pcd.estimate_normals(
                search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1,
                                                                  max_nn=30))

            # Camera center in world coordinates: C = -R^{-1} t for a
            # world-to-camera extrinsic [R | t].
            cam_C = -np.linalg.inv(cam_ext[:3, :3]) @ cam_ext[:3, -1]
            # Sphere marking the camera position, only used by the
            # commented-out visualization below.
            center_sphere = o3d.geometry.TriangleMesh.create_sphere(radius=0.2)
            center_sphere.translate((cam_C[0], cam_C[1], cam_C[2]))

            pcd.orient_normals_towards_camera_location(camera_location=cam_C)
            # o3d.visualization.draw_geometries([pcd, center_sphere])
            frame_id = os.path.basename(
                render_image_list[i]).split('.')[0][-5:]
            o3d.io.write_point_cloud(
                os.path.join(data_dir, f'pcl-f{frame_id}.ply'), pcd)

Hi,
the first part of your question is more of a MeshLab-specific question.

For the second part, regarding the "shadow": it comes from the camera trying to measure the depth of the sky outside the window.
To avoid this you can clamp the depth sensing range with depth_maximum_distance.
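Here is a minimal sketch with the ZED Python SDK; the 5 m limit and the output file name are placeholder values:

import numpy as np
import open3d as o3d
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
# Clamp the depth sensing range so distant surfaces (e.g. the sky seen
# through the window) are rejected instead of producing stray points.
init_params.depth_maximum_distance = 5.0  # placeholder value, in meters

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError('Failed to open the camera')

point_cloud = sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    # XYZRGBA packs the position in three float channels and the color
    # as four bytes in the fourth; only the geometry is used here.
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
    xyz = point_cloud.get_data()[:, :, :3].reshape(-1, 3)
    # Drop NaN/inf points (invalid or out-of-range depth).
    xyz = xyz[np.isfinite(xyz).all(axis=1)]
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz.astype(np.float64))
    o3d.io.write_point_cloud('zed_cloud.ply', pcd)
zed.close()

Note that by default the retrieved XYZ values are already expressed in the camera's own coordinate frame, so with a fixed camera you do not need to apply any extrinsic before saving.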