I am using a ZED X with a ZED Box Orin. I am trying to use the object detection module to localize a person in the frames, then use the mask returned by the module to extract the corresponding point cloud. The idea is this:
- Run object detection to localize the person.
- Use the detected object's mask (`objects.mask()`) to get the segmentation mask for that person.
- Overlay the mask on the depth map generated by the camera, filtering out noise so that only the point cloud data of the detected object remains.
- Use that point cloud to build a reconstructed mesh that can be loaded into a physics engine.
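The overlay step above is roughly what I am attempting; here is a minimal numpy sketch of it. The `xyz` and `mask` arrays stand in for what I retrieve from the SDK (the full-frame point cloud from `retrieve_measure`, and the object's mask, which the SDK returns cropped to the object's 2D bounding box); the function name and the synthetic data are just for illustration:

```python
import numpy as np

def masked_point_cloud(xyz, mask, top_left):
    """Keep only the points covered by a per-object binary mask.

    xyz:      (H, W, 3) float array, the full-frame point cloud.
    mask:     (h, w) uint8 array, the object's segmentation mask,
              cropped to its 2D bounding box.
    top_left: (row, col) of that bounding box inside the full frame.
    Returns an (N, 3) array of valid (finite) object points.
    """
    r0, c0 = top_left
    h, w = mask.shape
    # Paste the cropped mask into a full-frame boolean canvas.
    full = np.zeros(xyz.shape[:2], dtype=bool)
    full[r0:r0 + h, c0:c0 + w] = mask > 0
    pts = xyz[full]
    # Drop invalid depth samples (NaN/inf) before meshing.
    return pts[np.isfinite(pts).all(axis=1)]

# Tiny synthetic example: a 4x4 frame with a 2x2 mask placed at (1, 1).
xyz = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
xyz[1, 1] = np.nan                      # simulate a depth hole
mask = np.array([[1, 0], [1, 1]], dtype=np.uint8)
pts = masked_point_cloud(xyz, mask, (1, 1))
print(pts.shape)                        # (2, 3)
```

This is exactly where the problem shows up: when the mask clips part of the person, the corresponding points are silently dropped here and the resulting cloud is incomplete.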
The problem is that the mask generated by the object detection module is inconsistent: it often clips parts of the detected person, which makes it hard to overlay cleanly on the depth map.
Is there any way to make the mask more accurate? I also tried segmentation models such as YOLO, but without success. Any guidance on improving the mask would be appreciated, since it plays a vital role in my reconstruction pipeline. If you have other ideas for reconstructing a detected object with the ZED X camera, please let me know.