I wanted to ask how the SDK decides which of the data points from a detected object are used to fill the object.position.x/y/z values.
When an object is detected, it gets a 2D and a 3D bounding box enclosing some points from the point cloud. Does the SDK calculate an average over all the points inside the 3D box, or is it always the "middle" of the object? I need this information to reduce the tolerance of a distance measurement between two detected objects, which I calculate as the Euclidean distance between the x,y,z coordinates provided by the SDK.
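For reference, this is roughly how I compute that distance; a minimal sketch with made-up coordinates, where the tuples stand in for the position values the SDK returns for each detected object:

```python
import math

def euclidean_distance(p1, p2):
    """Straight-line distance between two 3D points given as (x, y, z)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical positions (in meters) as read from the SDK's
# object.position.x/y/z fields for two detected objects.
obj_a = (1.0, 0.5, 3.0)
obj_b = (2.0, 0.5, 3.0)

print(euclidean_distance(obj_a, obj_b))  # distance between the two objects
```

So any systematic offset in how the SDK picks the reference point (centroid of the cloud vs. box center) feeds directly into this measurement.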
Thank you very much!