Can we implement a particular function using the ZED 2?

In the case of 3D human skeleton extraction (the body tracking Python sample, D: Pyroch_Learning zed-sdk master zed-sdk master body tracking body tracking python), everything is done by calling the SDK. Could we work in the following way instead: we design our own two-dimensional human skeleton extraction network to get the key points, and then obtain the 3D coordinates of those key points through the depth module? If this is possible, how should it be implemented, and which functions are involved?


You should be able to use something close to what we show in the depth sensing tutorial:

Replace the point cloud by the depth, set your keypoint coordinates as x and y, and you should have what you want.
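A minimal sketch of that suggestion (my own illustration, not official Stereolabs code): with the real SDK you would fill the `xyz` array via `zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)` followed by `point_cloud.get_data()`; here a small synthetic array stands in for that measure, and `keypoints_to_3d` is a hypothetical helper name.

```python
import numpy as np

def keypoints_to_3d(xyz, keypoints_2d):
    """Look up the 3D point behind each (u, v) keypoint.

    xyz          -- H x W x 3 array of X, Y, Z values in the camera frame
                    (with the SDK: point_cloud.get_data()[:, :, :3])
    keypoints_2d -- iterable of (u, v) pixel coordinates from the 2D network
    Returns a list of (X, Y, Z) tuples, or None where depth is invalid (NaN).
    """
    points_3d = []
    for u, v in keypoints_2d:
        p = xyz[v, u]  # row index is v (image y), column index is u (image x)
        points_3d.append(tuple(p) if np.all(np.isfinite(p)) else None)
    return points_3d

# Synthetic stand-in point cloud: X = u, Y = v, Z = 2.0 everywhere
h, w = 4, 6
xyz = np.zeros((h, w, 3))
for v in range(h):
    for u in range(w):
        xyz[v, u] = (u, v, 2.0)
xyz[1, 3] = np.nan  # simulate a pixel where depth could not be computed

print(keypoints_to_3d(xyz, [(2, 1), (3, 1)]))
```

The only SDK-specific part is how `xyz` is obtained; the lookup itself is plain array indexing, since the measure is registered to the left image.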

Hi, thank you!
There is still a question. With the ZED 2 we get an image from the left view and one from the right view, but in the end we only get one point cloud. How can we connect a point in the left or right view with a point in the point cloud?

The data at index (u,v) of the point cloud sl::Mat corresponds to the data at index (u,v) of the sl::Mat that holds the left color pixels.

Can I understand it in the following way? For example, I have a point (100, 200) in the left view, and I can obtain the three-dimensional coordinates of that point from the point cloud with:
point3D = point_cloud.get_value(100, 200)


Yes, you understood correctly
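One caveat worth adding (the validation helper below is my own sketch, not from the thread): in the Python API, `get_value` returns an error code together with the value, and the X/Y/Z components are NaN wherever depth could not be computed (occlusions, low-texture areas), so it is worth validating each keypoint before using it. No camera is needed for this illustration; the tuples below simulate what `point_cloud.get_value(u, v)` would return.

```python
import math

def valid_point(err_value):
    """Return (X, Y, Z) if the measurement is usable, else None.

    err_value -- simulated (error_code, [X, Y, Z, color]) pair, shaped like
                 the return of point_cloud.get_value(u, v) in the ZED Python API.
    """
    err, value = err_value
    x, y, z = value[0], value[1], value[2]
    if all(math.isfinite(c) for c in (x, y, z)):
        return (x, y, z)
    return None  # depth was invalid at this pixel (NaN in the point cloud)

# Simulated returns; "SUCCESS" stands in for sl.ERROR_CODE.SUCCESS
print(valid_point(("SUCCESS", [0.12, -0.34, 1.80, 0.0])))  # usable keypoint
print(valid_point(("SUCCESS", [float("nan")] * 4)))        # invalid depth
```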