import pyzed.sl as sl

# 'objects' is assumed to have been filled beforehand via zed.retrieve_objects(objects)
for obj in objects.object_list:
    if obj.tracking_state == sl.OBJECT_TRACKING_STATE.OK:
        print(obj.mask.is_init())
        print(obj.mask.get_data())
However, the result is:
- obj.mask.is_init() always returns False
- obj.mask.get_data() always returns []
This post seems to suggest that segmentation masks are available with custom object detectors (e.g. YOLOv8) via the ZED SDK; however, this post suggests they aren't available.
Currently, the ZED SDK only supports ingesting custom bounding boxes from a custom object detector model.
As Object Detection models increasingly support segmentation masks, we are currently working on supporting custom instance segmentation masks in the ZED SDK so that our 3D tracking capabilities can be applied to them. This is planned for future versions of the SDK.
Both posts you’ve linked are correct: the SDK can ingest custom bounding boxes, and if segmentation is enabled, the SDK computes a segmentation mask based on the geometry of the object (not with AI).
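For reference, here is a minimal sketch of that ingestion flow in Python (pyzed). It assumes an already opened sl.Camera named zed and a hypothetical run_your_detector() helper standing in for your external model (e.g. YOLOv8); exact enum and flag names (OBJECT_DETECTION_MODEL vs. DETECTION_MODEL, enable_segmentation vs. enable_mask_output) vary between SDK 3.x and 4.x:

import pyzed.sl as sl
import numpy as np

# 'zed' is assumed to be an already opened sl.Camera.
# Positional tracking must be enabled beforehand when enable_tracking is True.
obj_param = sl.ObjectDetectionParameters()
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
obj_param.enable_tracking = True
obj_param.enable_segmentation = True  # called enable_mask_output on some SDK versions
zed.enable_object_detection(obj_param)

objects = sl.Objects()
runtime_param = sl.ObjectDetectionRuntimeParameters()

while zed.grab() == sl.ERROR_CODE.SUCCESS:
    detections = []
    # run_your_detector() is a hypothetical stand-in for your external model,
    # returning (x_min, y_min, x_max, y_max), confidence, class_id per detection.
    for (x_min, y_min, x_max, y_max), conf, class_id in run_your_detector():
        box = sl.CustomBoxObjectData()
        box.unique_object_id = sl.generate_unique_id()
        box.label = class_id
        box.probability = conf
        box.is_grounded = False
        # 2D bounding box as four image-space corners (clockwise from top-left).
        box.bounding_box_2d = np.array([[x_min, y_min], [x_max, y_min],
                                        [x_max, y_max], [x_min, y_max]])
        detections.append(box)
    zed.ingest_custom_box_objects(detections)

    # The SDK fuses the boxes with depth and 3D tracking; the geometry-based mask
    # is then exposed on obj.mask once the fix mentioned below is released.
    zed.retrieve_objects(objects, runtime_param)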
Regarding the errors you've noticed: we have indeed identified an issue with how the masks are returned, and we are preparing a fix for an upcoming SDK patch.
Indeed, this is a limitation of custom object detection segmentation. Since we do not have additional information about the objects, it is difficult to refine an object's mask in a generic way.
For comparison, with the ZED SDK's built-in Object Detection, where the classes are known, you can retrieve segmentation masks; these are computed from the geometry of the object, i.e. its 3D point cloud information.
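As an illustration of that built-in path, here is a minimal sketch in Python (pyzed), again assuming an already opened camera zed; the model enum name also varies by SDK version (e.g. MULTI_CLASS_BOX on 3.x):

import pyzed.sl as sl

obj_param = sl.ObjectDetectionParameters()
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST
obj_param.enable_tracking = True
obj_param.enable_segmentation = True  # enable_mask_output on older SDK versions
zed.enable_object_detection(obj_param)

objects = sl.Objects()
while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_objects(objects, sl.ObjectDetectionRuntimeParameters())
    for obj in objects.object_list:
        if obj.tracking_state == sl.OBJECT_TRACKING_STATE.OK and obj.mask.is_init():
            mask = obj.mask.get_data()  # per-object 2D mask as a numpy array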
So in your case, ingesting custom segmentation masks is probably more interesting; this feature is on our short-term roadmap.
Brilliant, I didn't spot that. I can see the model I have working in the Python example there, but if I load the same model into the Unreal Plugin I don't see the same masks generated. Since I don't see any mention of "ingestCustomBoxObjects" or "ingestCustomMaskObjects" in the Unreal Plugin, I'm guessing this isn't implemented there?
That's useful, thanks. I did look through USlCameraProxy to see if this was a simple fix, but it looks like you need to run the object detection model externally to the ZED SDK and then feed those objects in, rather than simply being able to enable this using the ONNX model that is selected in the Unreal Plugin?