Relationship between pointcloud and rgb image

My question is, does the point cloud correspond to the left view?

What I’m doing now is detecting objects in the left view, then looking up the XYZ values at the pixel position of the object's center in the (720, 1280, 4) point cloud numpy array, and treating that as the position of the object. Is that right?

Yes. The RGB left image, depth map, point cloud matrix, and all the other available data maps are registered, so each (u, v) pixel of one map corresponds to the same (u, v) pixel of the other maps.
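As a minimal sketch of that lookup (the array shapes, values, and the `object_xyz` helper are all hypothetical, not part of any SDK): since the maps are registered, the point cloud is indexed with (row, col) = (v, u), and taking the median of a small patch around the center makes the lookup robust to a single invalid (NaN) point.

```python
import numpy as np

# Hypothetical registered point cloud: shape (720, 1280, 4)
# holding X, Y, Z and a packed-color channel per pixel.
H, W = 720, 1280
point_cloud = np.full((H, W, 4), np.nan, dtype=np.float32)
point_cloud[..., 0] = 0.1   # X (assumed metres)
point_cloud[..., 1] = -0.2  # Y
point_cloud[..., 2] = 1.5   # Z

def object_xyz(cloud, u, v, patch=3):
    """Return the XYZ at pixel (u, v) of a registered point cloud.

    Uses the median of a small patch around the centre so one
    invalid (NaN) point does not break the lookup. Note the
    (row, col) = (v, u) index order for a numpy image array.
    """
    half = patch // 2
    region = cloud[max(v - half, 0):v + half + 1,
                   max(u - half, 0):u + half + 1, :3]
    return np.nanmedian(region.reshape(-1, 3), axis=0)

# Example: the detector reports an object centre at pixel (u=640, v=360).
xyz = object_xyz(point_cloud, 640, 360)
print(xyz)  # X, Y, Z of the object centre
```

One caveat worth keeping in mind: the exact center pixel can land on an occlusion or a depth hole, so a patch median (or filtering out non-finite values first) is safer than reading a single pixel.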