From other online sources, this error typically happens when two systems try to use TensorRT independently, e.g. pycuda and torch each managing their own CUDA context. However, we only have YoloV4 using TensorRT.
The TensorRT warning (in the image below) should be a false positive. We never transferred a .engine cache, and the warning is known to be faulty.
Attempting to create a CUDA context after the existing handle is broken produces a different error: [hardwareContext.cpp::configure::92] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)
If you’re not using NEURAL depth or any model from the Object Detection module, the ZED SDK does not use TensorRT at all.
Are you sure this is not a CUDA context issue or maybe a stream issue?
How did you manage the CUDA context? You can either create it yourself and give it to the ZED SDK (InitParameters.sdk_cuda_ctx), or let the ZED SDK create one, in which case you should not call cudaSetDevice or equivalent, except if you’re using multiple threads.
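A minimal sketch of the second option (letting the SDK create the context), assuming the pyzed wrapper is installed; note that `sdk_cuda_ctx` is the C++ `InitParameters` field, and a given Python build may not expose it:

```python
# Hedged sketch: open the camera without creating any CUDA context first.
# The ZED SDK then owns context creation, so do NOT call cudaSetDevice,
# pycuda.autoinit, or equivalent before Camera.open().
try:
    import pyzed.sl as sl
except ImportError:
    sl = None  # pyzed not installed in this environment

if sl is not None:
    init = sl.InitParameters()
    cam = sl.Camera()
    status = cam.open(init)  # SDK creates and manages its CUDA context
    print(status)
else:
    print("pyzed not installed; skipping")
```

If you instead create the context yourself (e.g. with pycuda), it must be the current context on the thread that calls `open()`.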
AttributeError: 'pyzed.sl.InitParameters' object has no attribute 'sdk_cuda_ctx'
# or
TypeError: __cinit__() got an unexpected keyword argument 'sdk_cuda_ctx_'
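Both failures are consistent with a Cython extension type that does not expose the field: such types reject unknown attributes (like a `__slots__` class) and unknown constructor keywords. A stand-in class (hypothetical, for illustration only; not the real pyzed API) reproduces both error shapes:

```python
class InitParameters:
    """Stand-in for pyzed.sl.InitParameters (illustrative only).

    Cython extension types behave like __slots__ classes: setting an
    attribute the wrapper does not declare raises AttributeError, and
    passing an unknown keyword to the constructor raises TypeError.
    """
    __slots__ = ("sdk_gpu_id",)

    def __init__(self, sdk_gpu_id=0):
        self.sdk_gpu_id = sdk_gpu_id


p = InitParameters()
try:
    p.sdk_cuda_ctx = None            # unknown attribute
except AttributeError as e:
    print("AttributeError:", e)

try:
    InitParameters(sdk_cuda_ctx_=None)  # unknown constructor keyword
except TypeError as e:
    print("TypeError:", e)
```

So the attribute is simply absent from this Python build of the wrapper, rather than misspelled on your side.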