From the ZED Mini camera we can get RGB (left image) data, depth data, and point cloud data.
First of all, what kind of numpy array do you convert each of these into?
In zed-tensorflow, we use the following code to transform the left (RGB) image into a uint8 numpy array:
import numpy as np

def load_image_into_numpy_array(image):
    ar = image.get_data()
    ar = ar[:, :, 0:3]  # keep only the first three channels, drop alpha
    (im_height, im_width, channels) = ar.shape
    return np.array(ar).reshape((im_height, im_width, 3)).astype(np.uint8)
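For reference, here is a minimal runnable sketch of that conversion. Since it needs no camera, a hypothetical stand-in class with a get_data() method takes the place of the real sl.Mat returned by the SDK:

```python
import numpy as np

class FakeZedImage:
    """Hypothetical stand-in for sl.Mat: get_data() returns an H x W x 4 array."""
    def __init__(self, h, w):
        self._data = np.zeros((h, w, 4), dtype=np.uint8)

    def get_data(self):
        return self._data

def load_image_into_numpy_array(image):
    ar = image.get_data()
    ar = ar[:, :, 0:3]  # keep only the first three channels, drop alpha
    (im_height, im_width, channels) = ar.shape
    return np.array(ar).reshape((im_height, im_width, 3)).astype(np.uint8)

img = load_image_into_numpy_array(FakeZedImage(720, 1280))
print(img.shape, img.dtype)  # (720, 1280, 3) uint8
```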
What kind of numpy arrays should the depth image and the point cloud be converted into for further processing?
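For what it's worth, my understanding of the Python ZED SDK is that a depth map retrieved with MEASURE.DEPTH comes back from get_data() as a float32 H x W array (depth in the configured unit, with NaN/inf at invalid pixels), and a point cloud retrieved with MEASURE.XYZRGBA as a float32 H x W x 4 array (X, Y, Z plus a packed color value). A small sketch with synthetic arrays of those shapes, showing how invalid depth values can be masked out; the shapes and dtypes here are my assumption about the SDK output, not something taken from this repo:

```python
import numpy as np

h, w = 4, 5  # tiny synthetic frame for illustration

# Depth map: float32, H x W; in the real SDK invalid pixels are NaN or +/-inf.
depth = np.full((h, w), 1.5, dtype=np.float32)
depth[0, 0] = np.nan   # simulate an occluded / unmeasured pixel
depth[1, 1] = np.inf   # simulate a too-far pixel

# Point cloud: float32, H x W x 4 (X, Y, Z, packed RGBA).
point_cloud = np.zeros((h, w, 4), dtype=np.float32)

valid = np.isfinite(depth)  # mask of usable depth measurements
print(valid.sum())          # 18 of the 20 pixels remain valid
```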
The second question is: do these three kinds of data correspond pixel-for-pixel? That is, after obtaining the numpy arrays for all three, how do I look up the depth value and the 3D position (from the point cloud) that correspond to a given pixel in the RGB image?
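Regarding correspondence: as far as I know, the left image, the depth map, and the point cloud from the ZED SDK are all registered to the left camera, so the same (row, col) index addresses the same physical point in all three arrays, and the depth value at a pixel equals the Z component of the point cloud at that pixel. A runnable sketch with synthetic data; the registration assumption is mine, please correct me if this repo handles it differently:

```python
import numpy as np

h, w = 6, 8
rgb = np.zeros((h, w, 3), dtype=np.uint8)

# Build a synthetic point cloud, then derive the depth map as its Z channel,
# mimicking the per-pixel registration I believe the SDK provides.
point_cloud = np.random.rand(h, w, 4).astype(np.float32)
depth = point_cloud[:, :, 2].copy()

u, v = 3, 2                        # pixel column (u) and row (v)
pixel_rgb = rgb[v, u]              # color at that pixel
pixel_depth = depth[v, u]          # depth (Z) at that pixel
pixel_xyz = point_cloud[v, u, :3]  # 3D position at that pixel

print(pixel_depth == pixel_xyz[2])  # depth equals the point's Z coordinate
```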