Acquiring Segmentation Mask from Custom Object Detection Model

I am trying to implement a YOLOv8 object detection model from Ultralytics and have run into issues applying a segmentation mask to detected objects, based on the given sample. Currently, bounding boxes are displayed, but the mask is not, even with detection_parameters.enable_segmentation = true. Furthermore, the code below, which queries the segmentation mask height, only outputs 0.

Note: I have wrapped the provided sample code in a ROS 2 node.

for (size_t j = 0; j < detections.size(); j++)
{
  // Guard against indexing past the SDK's object list, which may be
  // shorter than the raw detection list.
  if (j < objects.object_list.size())
  {
    RCLCPP_INFO_STREAM(LOGGER, ("obj_id: " + std::to_string((int) detections[j].label)));
    object_mask = objects.object_list[j].mask;
    // This always prints 0 for me.
    RCLCPP_INFO_STREAM(LOGGER, ("obj_height: " + std::to_string(object_mask.getHeight())));
  }
}

Hence, I thought an instance segmentation model (a YOLOv8n-seg model) was required, which led me to generate a .engine file, but it could not get past TensorRT engine initialization and crashed as shown below.

[screenshot: crash during TensorRT engine initialization]

Therefore, how can I obtain the segmentation mask and its related data through C++?

Hi,

Our custom OD module ingests detections from an external detector and adds tracking to them. It does not ingest segmentation masks. You can still use the masks in your own program, but the SDK will not help you with them.

So that means I have to include an external algorithm for the segmentation mask. If I do so, will I still be able to use the SDK to obtain the 3D position of an object based on its segmentation mask? As of now, from the code I infer that the 3D position of the object is obtained from the 3D bounding box, but I feel like I am misunderstanding something, since the point cloud is also involved in the sample.

Hi,

Indeed, the 3D positions are obtained from the bounding box. The SDK will not use the segmentation masks.