Setup:
SDK: ZED_SDK_Tegra_L4T36.4_v5.0.4.zstd
HW: Jetson AGX Orin Developer Kit
source file: /usr/local/zed/samples/camera control/python/
execution script: python camera_control.py
JetPack: 6.2
File "/usr/local/zed/samples/camera control/python/camera_control.py", line 75, in main
    if err <= sl.ERROR_CODE.SUCCESS:
TypeError: '<=' not supported between instances of 'ERROR_CODE' and 'ERROR_CODE'
Note:
The same error can be observed in /usr/local/zed/samples/object detection/custom detector/python/pytorch_yolov8_seg/detector.py at line 193, in main:
if zed.grab(runtime_params) <= sl.ERROR_CODE.SUCCESS:
It seems that some operator implementations (<, <=, >, and >=) are missing; they will be added in the next patch.
In the meantime, you can replace <= with ==:
if err == sl.ERROR_CODE.SUCCESS:
Be aware that with ==, your sample will not continue on WARNING-level grab results (negative ERROR_CODE values). A WARNING-level grab can occur when the image degradation detected by enable_image_validity_check occurs while the ZED SDK remains functional.
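The difference between == and <= can be sketched with a plain IntEnum as a stand-in for sl.ERROR_CODE (illustrative only; the names and values below are assumptions following the convention described above, not the actual pyzed enum):

```python
from enum import IntEnum

# Hypothetical stand-in for sl.ERROR_CODE: SUCCESS is 0,
# warning-level codes are negative, and hard errors are positive.
class ErrorCode(IntEnum):
    CORRUPTED_FRAME = -2   # assumed warning-level code
    CAMERA_REBOOTING = -1  # assumed warning-level code
    SUCCESS = 0
    FAILURE = 1            # assumed hard error

def grab_ok(err: ErrorCode) -> bool:
    # '<= SUCCESS' accepts SUCCESS and warning-level (negative) results,
    # whereas '== SUCCESS' would stop on warnings.
    return err <= ErrorCode.SUCCESS

print(grab_ok(ErrorCode.CORRUPTED_FRAME))  # warning-level: still OK
print(grab_ok(ErrorCode.FAILURE))          # hard error: not OK
```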
I’ll let you know once 5.0.5 is released, so that you can use <= again.
Please let us know if you encounter any other related issues or have further questions!
Thank you for the update.
The fix resolved the TypeError: ‘<=’ not supported between instances of ‘ERROR_CODE’ and ‘ERROR_CODE’.
However, the detector.py script execution freezes after:
SUCCESS
Network Initialized…
and nothing happens; no errors or warnings are displayed.
Please note that, in the meantime, I’ve updated the Torch and TorchVision binaries, which now execute object detection in under 30 ms.
I’m in a rush now, and I will update the camera_control.py file later today.
Some good news, and some new issues/recommendations were observed:
Good news: ZED SDK 5.0.5 resolves the TypeError: ‘<=’ not supported between instances of ‘ERROR_CODE’ and ‘ERROR_CODE’.
New issues:
Objects are detected but wrongly displayed in the window.
There were occasions where OpenCV had to be uninstalled and reinstalled in order to properly display the GUI. (I believe this is an old phenomenon related to the Linux distributions.)
Ultralytics and NVIDIA both recommend pip-installing a more efficient torch/torchvision binary combination, which brings detection time under 30 ms, clearly faster than the roughly 80-90 ms currently taken by the YOLO + ZED detector overhead; however, the default numpy version 2.2.6 must be downgraded to numpy 1.23.5. The related wheels are:
torch-2.5.0a0+872d972e41.nv24.08-cp310-cp310-linux_aarch64.whl
torchvision-0.20.0a0+afc54f7-cp310-cp310-linux_aarch64.whl
reference link: Quick Start Guide: NVIDIA Jetson with Ultralytics YOLO11
(detector.py script execution instructions) YOLOv11 gets installed by default, but the ZED README.md suggests yolov8s-seg.pt rather than, for example, yolo11n-seg.pt.
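As a quick sanity check before installing, the cp310 tag in the wheel filenames above can be parsed to confirm the target CPython version (a hedged sketch; only the standard wheel-naming convention is assumed):

```python
# Parse the python-version tag from a wheel filename
# (PEP 427 convention: name-version-pythontag-abitag-platform.whl).
def wheel_python_tag(wheel_name: str) -> str:
    return wheel_name.removesuffix(".whl").split("-")[-3]

wheels = [
    "torch-2.5.0a0+872d972e41.nv24.08-cp310-cp310-linux_aarch64.whl",
    "torchvision-0.20.0a0+afc54f7-cp310-cp310-linux_aarch64.whl",
]
for w in wheels:
    # cp310 = CPython 3.10, the default Python on JetPack 6.x
    print(w.split("-")[0], "->", wheel_python_tag(w))
```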
Thank you for the feedback!
Nice to read that your first issue is solved!
To better track follow-up issues and help future users find solutions to their problems (if you have more), it would be good practice to create a new forum post.
On the left you can see the 2D detection with the detected objects and the associated masks
On the right you can see the BEV (bird's-eye view) of the tracked objects, for demo purposes. You can press I / O to zoom in and out in this view. Now that I’m looking at it, we didn’t display the possible key commands in this sample (we’ll add that in a following release).
Good to know! If you face the issue again, feel free to log the steps so that we can smooth the installation process for others. OpenCV 4.10 is known to have issues with cv2.imshow, but it’s the first time I’ve read about uninstalling and reinstalling.
I’ve installed torch and torchvision with python -m pip install torch==2.5.0 torchvision==0.20.0 and had no conflicts with numpy 2.x. The command python -c "import numpy; print(numpy.__version__)" shows 2.2.6, and the sample runs without issue.
But if you want to use a combination of wheels requiring numpy 1.x together with pyzed, I guess a solution would be to downgrade your Python version to 3.8, since our 3.8 wheels are compiled against numpy 1.x.
Wrongly displayed: I expected to see the CLASS ID or CLASS NAME of the detected object based on the YOLOv8/11 pretrained dataset, as in all previous releases. Both releases (“ZED SDK” and “Ultralytics YOLOv8/11”) are baselines for my development project, in which from time to time I run catch-up tests and verification of the baseline. BTW, I’m using a copy of the sample folders at /usr/local/zed/samples… Please find a screenshot of my project. You can observe that for each detected object I show the CLASS ID (designated by “o”), the distance to the object (designated by “d”), and the location of the object (designated by “c”), where the CLASS ID and distance are provided by the modified detector.py from the ZED SDK releases. In this particular screenshot the detected CLASS IDs were: 63 for laptop, 62 for tv, 66 for keyboard, 41 for cup, and 0 for person, as referenced in “A list of all 80 YOLO classes and its index in JSON format” on GitHub.
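For reference, the class indices mentioned above map to COCO names as follows (a subset of the 80-class list referenced):

```python
# Subset of the standard 80-class COCO index -> name mapping used by the
# YOLO pretrained models (indices taken from the post above).
COCO_SUBSET = {
    0: "person",
    41: "cup",
    62: "tv",
    63: "laptop",
    66: "keyboard",
}
print(COCO_SUBSET[63])  # laptop
```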
I cannot remember the details of the GTK-related error that was displayed in the CLI and was resolved by the uninstall/reinstall OpenCV workaround. I will open a new issue with all the related details the next time it occurs.
The pip install ultralytics command also installs torch and torchvision binaries if they are not present; they are compatible with numpy 2.2.6 and do not conflict with the current official SDK release. On the other hand, at runtime these binaries are less efficient (80 ms vs 30 ms) than the ones recommended by NVIDIA and Ultralytics for JetPack 6.2 and YOLOv8/11 on Jetson Orin. Please note that for runtime efficiency I rely on an already optimized pretrained dataset and torch/torchvision binaries.
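As an aside, per-frame numbers like the 30 ms vs. 80 ms quoted here can be measured by timing around the inference call; a minimal sketch, where detect() is a hypothetical placeholder for the actual YOLO call:

```python
import time

def detect(frame):
    # Hypothetical placeholder for the YOLO inference call;
    # sleeps ~1 ms to stand in for real work.
    time.sleep(0.001)
    return []

frame = object()  # stand-in for a captured image
t0 = time.perf_counter()
detect(frame)
elapsed_ms = (time.perf_counter() - t0) * 1000.0
print(f"inference: {elapsed_ms:.1f} ms")
```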
This may give some more clues about where the issue is located.
I’ve re-flashed the board from scratch and used the released sample script in …/pytorch_yolov8/detector.py, which produced the expected result. I wonder if the problem is related to the SDK sample code of the object segmentation detection script …/pytorch_yolov8_seg/detector.py. I used ZED_SDK_Tegra_L4T36.4_v5.0.5.zstd.run.
I understand now the remaining visualization issue that you are describing:
the pytorch_yolov8_seg displays:
the tracking ID
the name of the detected object (unknown since it’s a custom model and the ZED SDK has no knowledge about what it represents)
the distance
while the pytorch_yolov8 displays:
the class idx (an integer)
the distance
We’ll unify that in the next release: as before, the class idx + the distance, plus the track ID when tracking is enabled.
In the meantime, if you want to replace UNKNOWN with the class idx in the pytorch_yolov8_seg sample, you can replace str(obj.label) with f"class {obj.raw_label}" in pytorch_yolov8_seg/cv_viewer/tracking_viewer.py#L127.
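The effect of that one-line change can be sketched in isolation (the Obj class below is a hypothetical stand-in for the SDK's object data; only the label and raw_label field names come from the thread):

```python
# Hypothetical stand-in for the detected-object data used by the viewer.
class Obj:
    def __init__(self, label, raw_label, obj_id):
        self.label = label          # enum-like label; "UNKNOWN" for custom models
        self.raw_label = raw_label  # raw class index from the custom detector
        self.id = obj_id            # tracking ID

obj = Obj("UNKNOWN", 63, 7)

before = str(obj.label)           # what tracking_viewer.py currently draws
after = f"class {obj.raw_label}"  # the suggested replacement
print(before, "->", after)
```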