Docker, GL, Object Detection Model and the SDK

I’m trying to build a Docker container in which I can use the various ZED tools on Ubuntu 22.04. I’m up against a few conflicting issues:

  1. There doesn’t seem to be a GL-enabled devel release that also works on 22.04. I believe I saw a GitHub issue discussing that NVIDIA hasn’t provided a base image supporting that combination, so StereoLabs doesn’t have anything to build on.

No big deal. I can base my own docker file on nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 and install the ZED SDK over the top.
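For reference, my Dockerfile attempt looks roughly like this (the apt dependency list is my guess at what the installer needs; adjust for your setup):

```dockerfile
# Base on NVIDIA's CUDA 12.1 devel image for Ubuntu 22.04
FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04

# Avoid interactive prompts from apt during the build
ENV DEBIAN_FRONTEND=noninteractive

# Packages the ZED SDK installer expects to find
RUN apt-get update && apt-get install -y --no-install-recommends \
        sudo zstd wget lsb-release udev && \
    rm -rf /var/lib/apt/lists/*

# Copy the SDK installer into the image and run it silently
COPY ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run /tmp/
RUN chmod +x /tmp/ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run && \
    /tmp/ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run -- silent && \
    rm /tmp/ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run
```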

  2. The problem I’m having is getting ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run to install cleanly and fully via Docker.

Because Docker runs everything as root, my RUN line looks like this:
RUN ./ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run -- silent

That leads to a mostly working install; however, ZED_Diagnostics complains about a corrupt Object Detection model. In fact, /usr/local/zed/resources is empty.

If I create a non-root user and run the installer by hand as that user, the Object Detection model seems to lay down OK. However, it’s a very interactive process and not one that’s going to work in Docker.
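For completeness, the non-root approach in Dockerfile form would look something like the sketch below (username and sudoers setup are my own choices); the interactive prompts are still the sticking point:

```dockerfile
# Create an unprivileged user with passwordless sudo for the installer
RUN useradd -m -s /bin/bash zeduser && \
    echo "zeduser ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/zeduser

USER zeduser
WORKDIR /home/zeduser

# Run the installer as the non-root user
COPY --chown=zeduser ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run .
RUN chmod +x ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run && \
    ./ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run -- silent
```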

I’ve also tried installing the ZED SDK by hand, copying off the contents of /usr/local/zed, and then mounting it back into the container. However, ZED Diagnostics isn’t cool with that and still complains.

How do I get a complete install (preferably as a root user) in Docker?

Hi @kadakadak,

Here are a few commands that may help you install the ZED SDK by hand in Docker:

export DEBIAN_FRONTEND=noninteractive
chmod +x $INSTALLER_ZED_FILE
sudo $INSTALLER_ZED_FILE -- silent skip_cuda skip_tools skip_python
sudo chown "${USER}" -R /usr/local/zed
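Annotated version of the same steps, with the flag meanings as I understand them (the installer path is a placeholder for your filename):

```shell
# Assumed installer path inside the container; adjust to your filename
export INSTALLER_ZED_FILE=/tmp/ZED_SDK_Ubuntu22_cuda12.1_v4.1.3.zstd.run
export DEBIAN_FRONTEND=noninteractive

chmod +x "$INSTALLER_ZED_FILE"
# skip_cuda: CUDA is already provided by the nvidia/cuda base image
# skip_tools / skip_python: omit the GUI tools and Python API if not needed
sudo "$INSTALLER_ZED_FILE" -- silent skip_cuda skip_tools skip_python
# Give your user ownership so model optimization can write into resources/
sudo chown -R "${USER}" /usr/local/zed
```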

If you wish to persist the AI models from one run to another to avoid re-optimizing them, I would suggest mounting the /usr/local/zed/resources directory.
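For example, a bind mount along these lines should persist the optimized models between runs (the host path and image name are placeholders):

```shell
# Persist optimized AI models across container runs via a bind mount
docker run --gpus all -it \
    -v /host/zed_resources:/usr/local/zed/resources \
    my-zed-image:latest
```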

Hope this helps.