I am working on a project with a ZED2 camera. I need help using the MediaPipe hand detection module together with the ZED2: I want to extract the depth-estimated 3D points for the hands only and visualize them. How can I extract the hand point cloud, remove all the background points, and combine MediaPipe hand detection with the ZED2 camera?
Hello,
I can’t write this code for you. You should check out our Body Tracking tutorial: zed-sdk/body tracking/body tracking at master · stereolabs/zed-sdk · GitHub
Using the mask attribute of the BodyData object will give you the pixels involved in a body tracking detection. Make sure you use BODY_FORMAT_70 to get the detailed hands.
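As an illustrative sketch of the masking idea (not SDK code: the arrays here are synthetic NumPy data, and the shapes are assumptions), a per-pixel detection mask can be used to keep only the 3D points that belong to the detected body or hand and drop the background:

```python
import numpy as np

# Hypothetical data: a small (H, W, 3) point cloud and a binary detection mask.
# In practice these would come from the camera and the tracking module.
point_cloud = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # pretend these 4 pixels belong to the detected hand

# Boolean indexing keeps only the 3D points whose pixel is inside the mask
hand_points = point_cloud[mask]  # shape (N, 3), background removed
print(hand_points.shape)
```

The same indexing works with a mask built from MediaPipe hand landmarks (e.g. by rasterising a region around the landmarks), since both the mask and the point cloud are per-pixel arrays of the same resolution.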
Thank you for the help. Can you also help me with my other question: how do I extract the depth value for a particular [x, y] coordinate, i.e. depth_value = depth_map_np[y][x]? Please look at the post.
The numpy array contains these values. Check this tutorial, zed-sdk/image_capture.py at master · stereolabs/zed-sdk · GitHub, which does what you need.
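For reference, indexing a single-channel depth map stored as a NumPy array looks like this (the array below is synthetic, not from a camera; note the row index `y` comes first):

```python
import numpy as np

# Synthetic single-channel depth map in metres, shape (H, W)
depth_map_np = np.full((1080, 1920), 2.5, dtype=np.float32)
depth_map_np[804, 1609] = 0.5  # plant a known value at one pixel

x, y = 1609, 804
depth_value = depth_map_np[y, x]  # row (y) first, then column (x)
print(depth_value)
```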
I am getting this output:
Depth at (1609, 804): [ 3.0693591e-01 1.2300891e-01 5.0059068e-01 -2.3553920e+38] meters
I need to understand these values, and I need the Z-axis value.
How do you get those values?
To get the depth values you should retrieve MEASURE::DEPTH, which is a one-channel matrix containing Z values.
It looks like you are displaying point cloud values from this sample:
In this case the depth (Z) is the 3rd value. Its unit depends on the InitParameters you set.
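The four numbers per pixel are consistent with a colour point-cloud measure: X, Y, Z in the configured unit, plus the pixel colour packed bitwise into the fourth float, which is why it prints as a huge negative number. A small sketch, assuming that XYZRGBA layout (the values are copied from the output above, purely for illustration):

```python
import numpy as np

# One XYZRGBA point, as stored per pixel by a colour point-cloud measure
point = np.array([3.0693591e-01, 1.2300891e-01, 5.0059068e-01,
                  -2.3553920e+38], dtype=np.float32)

z = float(point[2])  # the 3rd value is the depth (Z)

# Reinterpret the 4th float's 32 bits as four 8-bit colour channels
rgba = point[3:4].view(np.uint8)
print(f"Z = {z:.4f}, colour bytes = {rgba.tolist()}")
```

If you only need Z, retrieving the one-channel MEASURE::DEPTH instead avoids dealing with the packed colour entirely.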