Measuring a custom object's width

Hey, I am trying to measure a custom object's width.
I know the object's width in real life is 1.05 meters.
Dimension 0 of the detected object returns 0.82.
If I use the MEASURE.XYZRGBA data at x_0 and x_1 of the object and calculate the Euclidean distance between the two points, I get a distance of 0.92.
How can I get the width of the object accurately? (It's about 1.5 m from the camera.)
Also, how come the distance between the two points differs from dimension 0 by such a large factor?
It might be important to note that the object is not flat: it is an umbrella.


I guess the difference comes from the way you make your calculation, because we probably use the same data. As for the accuracy, that depends on the model you chose, the resolution used, etc. Maybe you can share your code?


Hey, this is the code I use:

import numpy as np
import pandas as pd
import pyzed.sl as sl

# init run parameters
input_type = sl.InputType()
init_params = sl.InitParameters(input_t=input_type, svo_real_time_mode=False)
init_params.depth_mode = sl.DEPTH_MODE.ULTRA
init_params.coordinate_units = sl.UNIT.METER
runtime = sl.RuntimeParameters()
runtime.confidence_threshold = 100
runtime.sensing_mode = sl.SENSING_MODE.STANDARD
cam = sl.Camera()
status = cam.open(init_params)
positional_tracking_parameters = sl.PositionalTrackingParameters()
cam.enable_positional_tracking(positional_tracking_parameters)
detection_parameters = sl.ObjectDetectionParameters()
detection_parameters.detection_model = sl.DETECTION_MODEL.CUSTOM_BOX_OBJECTS
detection_parameters.enable_tracking = True
detection_parameters.enable_mask_output = True
cam.enable_object_detection(detection_parameters)
res = cam.grab(runtime)
# set bbox based on x_0, y_0, x_1, y_1, and calculate
bb_box = [np.array([[x_0, y_0], [x_1, y_0], [x_1, y_1], [x_0, y_1]])]
objects_df = pd.DataFrame({"unique_object_id": [sl.generate_unique_id()],
                           "label": [0],
                           "probability": [1],
                           "bounding_box_2d": bb_box,
                           "is_grounded": [False]})
objects = get_objects(cam, objects_df)
print(objects[0].dimensions[0])  # 0.82
point_cloud = sl.Mat()
cam.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
point_cloud = point_cloud.get_data()
med_y = int((y_0 + y_1) / 2)
euclid_dist = np.sqrt(np.sum(np.power(point_cloud[med_y, x_0][:3] - point_cloud[med_y, x_1][:3], 2)))
print(euclid_dist)  # 0.92
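One thing that can skew this kind of measurement: depth at the extreme edge pixels of a bounding box is often invalid (NaN), especially on a thin, curved rim like an umbrella's, so indexing the point cloud exactly at x_0 and x_1 can land on bad data. Below is a minimal sketch of stepping inward from each edge until a finite point is found before taking the 3D distance. The point-cloud array here is synthetic stand-in data, not a real XYZRGBA measure.

```python
import numpy as np

def edge_distance_3d(point_cloud, row, x_left, x_right):
    """Step inward from each bbox edge past invalid (NaN) pixels,
    then return the Euclidean distance between the first valid 3D points."""
    def first_valid(xs):
        for x in xs:
            p = point_cloud[row, x, :3]
            if np.all(np.isfinite(p)):
                return p
        return None

    left = first_valid(range(x_left, x_right + 1))
    right = first_valid(range(x_right, x_left - 1, -1))
    if left is None or right is None:
        return None  # no valid depth anywhere on that row
    return float(np.linalg.norm(left - right))

# Synthetic 4-channel "point cloud": border columns are NaN, as often
# happens at object edges in real depth data.
pc = np.full((10, 10, 4), np.nan, dtype=np.float32)
for x in range(2, 8):
    pc[5, x, :3] = [0.1 * x, 0.0, 1.5]  # points on a line at depth 1.5 m

print(edge_distance_3d(pc, row=5, x_left=0, x_right=9))
```

Using `np.linalg.norm` on the XYZ slice also makes the 3D nature of the distance explicit.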

I'll try to emphasize my point: the object's width from the ZED calculation differs from the Euclidean distance between the bounding box edges, and both differ from the real-world value. I am trying to understand how you calculate the width of an object in general.

Could you please format your code? It's difficult to read without the indentation.


Hey, there is no indentation in the code; every statement starts at the left margin.


Indeed. Thus, there is no grab loop, and you retrieve only one image from the camera, the very first one. You should definitely try to retrieve more images, and see if the distance is different after a few frames.
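To illustrate why more frames help: per-pixel depth in STANDARD sensing mode is noisy, so a distance taken from a single frame can easily be off by a few centimeters. A minimal sketch of aggregating the measurement over several frames with a median; the per-frame distances here are synthetic stand-ins, since no camera is attached, but in the real code each sample would come from one iteration of the grab loop:

```python
import numpy as np

# In the real code this would sit inside the grab loop, e.g.:
#   while cam.grab(runtime) == sl.ERROR_CODE.SUCCESS:
#       ... retrieve the measure, compute the distance, append it ...
rng = np.random.default_rng(0)
true_width = 0.92
frame_distances = true_width + rng.normal(0.0, 0.02, size=30)  # synthetic per-frame noise

# A median over frames is robust to the occasional bad depth sample.
stable_width = float(np.median(frame_distances))
print(round(stable_width, 3))
```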

Also, I'm not sure your calculation of the Euclidean distance is right. It's 3D, and you are not using Z.


Hey Antoine, thanks for your answers.
Please note that while calculating the Euclidean distance I use the first 3 columns of the point cloud, so it's X, Y, Z.
This code is partial; I didn't want to include irrelevant stuff from the loop, but there is a grab of new frames. In each frame, the bounding box dimensions differ from the Euclidean distance between the sides of the bounding box (as shown in the code).

What I'm trying to understand is how ZED calculates the width, given the differences from the Euclidean distance measures. My assumption is that you take the distance to the center of the object and the width in pixels, then use these two scalars to calculate the width in mm.
This is in order to know if I could (and with what certainty) use the bounding box's first dimension as a measure of width for round/non-flat objects.
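For what it's worth, the scheme described above (distance to the object plus width in pixels) is just the pinhole projection model. A minimal sketch of that back-projection, purely to illustrate the assumption, not what the SDK actually does; the focal length and pixel coordinates below are made up, and the real fx would come from the camera's calibration:

```python
# Pinhole model: a span of `px` pixels at depth Z subtends px * Z / fx meters.
fx = 700.0           # assumed horizontal focal length, in pixels (illustrative)
depth_z = 1.5        # distance to the object, in meters
x_0, x_1 = 300, 730  # assumed bbox edge columns (illustrative)

width_m = (x_1 - x_0) * depth_z / fx
print(round(width_m, 3))  # 0.921
```

Note this only holds when both edge points sit at the same depth; for a curved object the edges are farther from the camera than the center, which biases the estimate.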

This is in order to know if I could (and with what certainty) use the bounding box's first dimension as a measure of width for round/non-flat objects.

This, you can do. With what certainty, you cannot know, but we may expose this kind of information in future releases.
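One geometric point worth checking on the questioner's side: if the 1.05 m "real" width was measured along the curved canopy (a tape measure laid over the fabric), then any straight-line measurement, whether the 3D bounding-box dimension or the point-to-point distance, will come out shorter, since a chord is always shorter than its arc. A small sketch of that relationship for a circular arc; the umbrella-as-circular-arc model and the angles are assumptions for illustration, not measurements:

```python
import math

def chord_for_arc(arc_length, half_angle):
    """Chord length of a circular arc with the given arc length
    and half-angle (radians): chord = arc * sin(h) / h."""
    radius = arc_length / (2.0 * half_angle)
    return 2.0 * radius * math.sin(half_angle)

arc = 1.05  # tape-measure width along a curved surface, in meters
for half_angle in (0.3, 0.6, 0.9):
    # The more curved the surface (larger angle), the shorter the chord.
    print(round(chord_for_arc(arc, half_angle), 3))
```

With a pronounced curvature the chord of a 1.05 m arc drops into the 0.9 m range, which is the same order as the gap observed in this thread.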