What's the meaning of "optimization" in the context of the ZED AI modules

Hi there,

I’m using ZED X cameras.

ZED Diagnostic Tool

- Running ZED SDK Diagnostic : OK  
    ZED SDK version: 4.1.0
    CUDA version: V11.4.315
- Running Processor Diagnostic : OK  
    Processor:   ARMv8 Processor rev 1 (v8l)
    Motherboard:  NVIDIA Orin NX with Syslogic BRLA4NX Carrier, Unknown
    Error: unable to open display
- Running Graphics Card Diagnostic : OK  
    Graphics card:  Orin
- Running Devices Diagnostic : OK  

I’ve been running through the AI model optimisation processes using the ZED_Diagnostic tool. I’ve searched the documentation and have a couple of questions:

  1. What exactly about the model is being optimised? Is it being optimised to suit the hardware of the system? Is the model being optimised to be as small as possible on disk, as fast as possible, as accurate as possible, or something else?

  2. I’ve run the optimisation process a few times at various points in time, and now I have a few different versions of all the models - should I be deleting duplicate model versions?

    $ ls /usr/local/zed/resources
    
    neural_depth_3.6.model
    .neural_depth_3.6.model_optimized-fbcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-0e53-512
    .neural_depth_3.6.model_optimized-kcebl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-0e53-1024
    objects_accurate_3.2.model
    .objects_accurate_3.2.model_optimized-fbcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    objects_medium_3.2.model
    .objects_medium_3.2.model_optimized-ebgbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    objects_performance_3.2.model
    .objects_performance_3.2.model_optimized-fbcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    person_head_accurate_2.4.model
    .person_head_accurate_2.4.model_optimized-jgabl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    person_head_performance_2.4.model
    .person_head_performance_2.4.model_optimized-idcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    person_reid_1.4.model
    .person_reid_1.4.model_optimized-aaabl-8-87-11040-8600-8502-16-64-2048-48-164-512-8-10
    skeleton_body18_3.2.model
    .skeleton_body18_3.2.model_optimized-bgabl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-160
    .skeleton_body18_3.2.model_optimized-ccebl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-224
    .skeleton_body18_3.2.model_optimized-dfcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-352
    skeleton_body38_3.5.model
    .skeleton_body38_3.5.model_optimized-bjcbl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-192
    .skeleton_body38_3.5.model_optimized-dcabl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-320
    .skeleton_body38_3.5.model_optimized-eeibl-1-87-11040-8600-8502-16-64-2048-48-164-512-8-1-2-448
    
  3. Is there any way I can verify that the optimised AI module is being used? I’m trying out the ZED YOLOv8 custom detector but I didn’t notice much of a performance boost after optimising the DEPTH and OBJECT_DETECTION modules so I’m wondering if they’re actually being used.

I appreciate any insights!

Hi Andrew,

Let’s go in order:

What exactly about the model is being optimised? Is it being optimised to suit the hardware of the system? Is the model being optimised to be as small as possible on disk, as fast as possible, as accurate as possible, or something else?

It’s being optimized to get the best performance out of your specific hardware, both accuracy-wise and runtime-wise. The optimization builds an inference engine tuned to your GPU, which is why it has to be run on the machine that will actually use the model.
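Concretely, the optimization produces a device-specific engine that is cached as a hidden file next to the original .model in /usr/local/zed/resources, which is exactly what the ls output in your question shows. Here is a small helper of my own (not part of the ZED SDK) to list which models already have an optimized engine on this machine:

    from pathlib import Path

    # Default model location on Linux; adjust if your install differs.
    resources = Path("/usr/local/zed/resources")

    for model in sorted(resources.glob("*.model")):
        # Optimized engines are cached as hidden ".<name>_optimized-..." files.
        engines = sorted(resources.glob("." + model.name + "_optimized-*"))
        status = f"{len(engines)} optimized engine(s)" if engines else "not optimized yet"
        print(f"{model.name}: {status}")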

I’ve run the optimisation process a few times at various points in time, and now I have a few different versions of all the models - should I be deleting duplicate model versions?

These versions are different variants of the models, for FAST, MEDIUM and ACCURATE (see detection_model in BodyTrackingParameters).
Theoretically, models that have already been optimized will not be re-optimized.
The only risk you take by deleting the optimized files is that you will need to re-optimize them.
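For reference, here is a minimal sketch (assuming the ZED SDK 4.x Python API) of how those variants are selected in code; the detection_model you pass decides which of the optimized engines on disk gets loaded:

    import pyzed.sl as sl

    zed = sl.Camera()
    init_params = sl.InitParameters()
    init_params.depth_mode = sl.DEPTH_MODE.NEURAL
    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        exit(1)

    # Body tracking needs positional tracking when its own tracking is enabled.
    zed.enable_positional_tracking(sl.PositionalTrackingParameters())

    # Pick the variant: HUMAN_BODY_FAST / HUMAN_BODY_MEDIUM / HUMAN_BODY_ACCURATE.
    body_params = sl.BodyTrackingParameters()
    body_params.detection_model = sl.BODY_TRACKING_MODEL.HUMAN_BODY_ACCURATE
    zed.enable_body_tracking(body_params)

    # Object detection follows the same pattern with
    # sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST / _MEDIUM / _ACCURATE.

    zed.close()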

Is there any way I can verify that the optimised AI module is being used? I’m trying out the ZED YOLOv8 custom detector but I didn’t notice much of a performance boost after optimising the DEPTH and OBJECT_DETECTION modules so I’m wondering if they’re actually being used.

To use an AI model at all, it needs to be optimized first. If you set the depth mode to NEURAL or NEURAL_PLUS, it will use an (optimized) AI model.
To be clear, optimizing does not give the system a boost: it is what makes the model runnable on your machine in the first place, and only “secondarily” does it tune it to run as well as possible on your hardware.
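If you want to check this on your setup, here is a minimal sketch (assuming the ZED SDK 4.x Python API and the stock YOLOv8 custom-detector setup). If a required model has not been optimized yet, the call itself triggers the optimization and takes several minutes; if it returns quickly, the already-optimized engine is being picked up. Also note that, as far as I understand, with CUSTOM_BOX_OBJECTS the 2D detections come from your own YOLOv8 model, so the objects_* models are not involved in that part at all.

    import pyzed.sl as sl

    zed = sl.Camera()

    init_params = sl.InitParameters()
    # NEURAL (or NEURAL_PLUS) depth runs on the optimized neural depth model.
    init_params.depth_mode = sl.DEPTH_MODE.NEURAL

    err = zed.open(init_params)
    print("open():", err)  # slow only if the depth model still needs optimizing

    # Object tracking on top of the custom boxes needs positional tracking.
    zed.enable_positional_tracking(sl.PositionalTrackingParameters())

    od_params = sl.ObjectDetectionParameters()
    # Custom detector: the SDK ingests your YOLOv8 boxes and adds 3D + tracking.
    od_params.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
    err = zed.enable_object_detection(od_params)
    print("enable_object_detection():", err)

    zed.close()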

I’ll add that, to make sure the optimization runs as well as possible, avoid running other GPU-heavy tasks while the models are being optimized.