Hi,
I’m working on a vision-based warehouse automation project using the ZED X Mini camera and ZED Box Mini (Orin NX). We’re developing a system that needs to detect and analyze objects like pallets and racks using a custom object detector (YOLOX) along with the ZED SDK’s depth sensing.
I have a few questions:
- Is it possible to combine a custom 2D object detector (like YOLOX) with the ZED SDK depth data, for example, running detection on RGB frames and then querying the depth map for those bounding-box regions? (Sketch 1 below shows what I have in mind.)
- For objects such as pallets or racks, could the depth API be used to infer details like (sketch 2 below):
  - surface irregularities (e.g., bumps or asymmetric loads on top of a pallet),
  - stacked pallets, or
  - rack occupancy / empty slots (using depth or volume estimation)?
- Is there a recommended workflow or example for fusing custom object detection outputs with ZED depth data to extract spatial or volumetric insights? (Sketch 3 is the route I'm currently considering.)
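
To make the questions concrete, here are some rough, untested sketches (Python, `pyzed`). Sketch 1 is what I had in mind for the first question: `detect_yolox` is a placeholder for our own YOLOX wrapper, not an SDK call, and I'm assuming `DEPTH_MODE.NEURAL` is a reasonable choice on the Orin NX.

```python
import numpy as np
import pyzed.sl as sl

def detect_yolox(bgr_frame):
    """Placeholder for our YOLOX inference. Returns a list of
    (x1, y1, x2, y2, score, class_id) boxes in pixel coordinates."""
    return []  # our model goes here

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL  # assumption: fits the Orin NX budget
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open ZED X Mini")

image, depth = sl.Mat(), sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)        # BGRA frame for the detector
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # float32 depth map, same resolution
    depth_np = depth.get_data()

    for (x1, y1, x2, y2, score, cls) in detect_yolox(image.get_data()[:, :, :3]):
        roi = depth_np[int(y1):int(y2), int(x1):int(x2)]
        valid = roi[np.isfinite(roi)]  # depth is NaN/inf where it couldn't be matched
        if valid.size:
            # median is more robust than mean against background pixels
            # caught at the edges of the box
            print(f"class {cls} ({score:.2f}): median depth {np.median(valid):.2f} m")

zed.close()
```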
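
Sketch 2 is my current idea for the surface and occupancy questions: fit a plane to the point-cloud ROI under a detection and look at the residuals. The plane-fit approach is my own guess, not something I found in the SDK docs, so I'd be glad to hear if there's a better built-in route.

```python
import numpy as np
import pyzed.sl as sl

def surface_stats(zed, bbox, min_points=100):
    """Rough surface statistics inside a detector box (x1, y1, x2, y2),
    from an already-opened sl.Camera after a successful grab().
    Assumes the box is tight around a pallet top or rack slot."""
    cloud = sl.Mat()
    zed.retrieve_measure(cloud, sl.MEASURE.XYZ)  # per-pixel 3D points, camera frame
    x1, y1, x2, y2 = map(int, bbox)
    pts = cloud.get_data()[y1:y2, x1:x2, :3].reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1)]      # drop unmatched pixels
    if len(pts) < min_points:
        return None

    # Least-squares plane via SVD; residuals are signed distances to it.
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid, full_matrices=False)[2][-1]
    residuals = (pts - centroid) @ normal

    # Interpretation (my assumptions):
    #  - large rms / peak-to-peak residuals -> bumps or an asymmetric load;
    #  - a residual histogram with several modes -> stacked layers;
    #  - median range near the rack's known back plane -> empty slot.
    return {
        "rms_roughness_m": float(np.sqrt(np.mean(residuals ** 2))),
        "peak_to_peak_m": float(np.ptp(residuals)),
        "median_range_m": float(np.median(np.linalg.norm(pts, axis=1))),
    }
```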
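
Sketch 3 is the workflow I'm currently considering for the third question: feeding the YOLOX boxes into the SDK's custom detector ingestion (`OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS`) so the SDK handles the depth fusion, 3D boxes, and tracking. I pieced this together from the API reference, so please correct me if I'm misusing it.

```python
import numpy as np
import pyzed.sl as sl

def detect_yolox(bgr_frame):
    """Placeholder for our YOLOX wrapper (see sketch 1)."""
    return []

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.coordinate_units = sl.UNIT.METER
assert zed.open(init_params) == sl.ERROR_CODE.SUCCESS

# As far as I can tell, positional tracking must be on for object detection.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

od_params = sl.ObjectDetectionParameters()
od_params.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
od_params.enable_tracking = True
zed.enable_object_detection(od_params)

image = sl.Mat()
objects = sl.Objects()
od_runtime = sl.ObjectDetectionRuntimeParameters()

while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)
    ingested = []
    for (x1, y1, x2, y2, score, cls) in detect_yolox(image.get_data()[:, :, :3]):
        box = sl.CustomBoxObjectData()
        box.unique_object_id = sl.generate_unique_id()
        # 2D box as four corners: top-left, top-right, bottom-right, bottom-left.
        box.bounding_box_2d = np.array(
            [[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=np.uint32)
        box.label = int(cls)
        box.probability = float(score)
        box.is_grounded = True  # pallets/racks rest on the floor
        ingested.append(box)

    zed.ingest_custom_box_objects(ingested)
    zed.retrieve_objects(objects, od_runtime)
    for obj in objects.object_list:
        # position / dimensions / bounding_box come back fused with depth
        print(obj.id, obj.position, obj.dimensions)
```

If ingestion is the intended route, does it play well with the per-ROI analysis from sketch 2, or should that stay a separate pass over the point cloud?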
Any guidance, references, or example pipelines for integrating the depth map with an external detector would be really helpful.
Thanks!
— M