ZED SDK v4.1 is out!

We are pleased to announce that ZED SDK v4.1 is available for download.

ZED SDK v4.1 is the first production-ready release of ZED SDK v4, bringing stability improvements and advanced new features.

Release Notes

What’s New

This release of the ZED SDK is now a stable release of the major version 4. It brings significant stability and performance improvements. We’re introducing a new generation of the Positional Tracking module for more precise and robust localization. A new version of SVO is also available, recording full sensor data and supporting custom user data for ease of use.

4.1.0

New Features

Positional Tracking GEN2

  • Introduced a new Positional Tracking generation, GEN2. It can be enabled with PositionalTrackingParameters::mode = POSITIONAL_TRACKING_MODE::GEN2. It outputs more robust and precise trajectories than GEN1 by leveraging IMU data at full frequency for better motion estimation. This generation requires the new SVO2 data; a new error code, ERROR_CODE::SENSORS_DATA_REQUIRED, has been introduced to warn about missing high-frequency data when using the module. The state enums have been reworked for clarity on the module status: ODOMETRY_STATUS, SPATIAL_MEMORY_STATUS, and POSITIONAL_TRACKING_FUSION_STATUS are now available through the new PositionalTrackingStatus struct. A usage sketch follows below.
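As a rough illustration, enabling GEN2 and polling the new status struct might look like this sketch; the getter name getPositionalTrackingStatus and the exact enum spellings are assumptions based on the names above, not verified against the header:

```cpp
// Sketch: open a camera, enable GEN2 tracking, and poll the new status struct.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.coordinate_units = sl::UNIT::METER;
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    sl::PositionalTrackingParameters tracking_params;
    tracking_params.mode = sl::POSITIONAL_TRACKING_MODE::GEN2; // spelling per the note above
    zed.enablePositionalTracking(tracking_params);

    sl::Pose pose;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        zed.getPosition(pose, sl::REFERENCE_FRAME::WORLD);
        // Assumed getter; exposes the reworked ODOMETRY_STATUS, SPATIAL_MEMORY_STATUS,
        // and POSITIONAL_TRACKING_FUSION_STATUS enums described above.
        sl::PositionalTrackingStatus status = zed.getPositionalTrackingStatus();
    }
    zed.close();
    return 0;
}
```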

Recording using SVO2

  • Introduced a new default SVO2 format that records sensor data at full frequency.
  • Supports custom user data through the new Camera::ingestDataIntoSVO function and the new SVOData struct. The data can be retrieved when opening the file using Camera::retrieveSVOData. Each data stream is identified by a string key, and each data point carries a timestamp for synchronous replay if needed. Camera::getSVODataKeys lists all the custom data stream keys present in the current SVO2 file. This format is mandatory to use the new Positional Tracking GEN2 with IMU data for best accuracy. The older format can still be recorded by setting the environment variable ZED_SDK_SVO_VERSION to 1. A recording/playback sketch follows below.
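Here is a minimal sketch of the recording/playback round trip. The SVOData field names (key, timestamp_ns) and the setContent helper are assumptions based on the description above; check the API reference for the exact signatures:

```cpp
// Sketch: record custom user data into an SVO2 file, then read it back by key.
#include <sl/Camera.hpp>
#include <map>
#include <string>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    sl::RecordingParameters rec_params("my_recording.svo2");
    zed.enableRecording(rec_params);

    for (int i = 0; i < 100 && zed.grab() == sl::ERROR_CODE::SUCCESS; ++i) {
        sl::SVOData data;
        data.key = "GNSS_json";                          // string key identifying this data stream
        data.setContent("{\"lat\":48.85,\"lon\":2.35}"); // payload; setContent assumed to take a string
        data.timestamp_ns = zed.getTimestamp(sl::TIME_REFERENCE::IMAGE);
        zed.ingestDataIntoSVO(data);
    }
    zed.close();

    // Playback: open the file and pull the custom stream back out by key.
    sl::InitParameters play_params;
    play_params.input.setFromSVOFile("my_recording.svo2");
    zed.open(play_params);
    std::map<sl::Timestamp, sl::SVOData> retrieved;
    zed.retrieveSVOData("GNSS_json", retrieved); // optional time-range arguments assumed
    zed.close();
    return 0;
}
```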

Improved Neural Depth

  • Introduced a new NEURAL+ mode that provides higher quality than NEURAL, with sharper edges and more accurate depth, at the cost of more computation.
  • Improved NEURAL mode accuracy and confidence. The new model is more accurate with no runtime difference. The confidence has been reworked for accurate edges and minimized flying pixels, and its estimation is improved in challenging areas such as sky, harsh illumination, and very close objects, reliably removing false depth measurements. The Neural models from previous versions remain accessible on demand. A sketch of selecting the new mode follows below.
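Selecting the new mode should only require changing the depth mode at opening. The exact enum spelling is an assumption (a reply below reports it as NEURAL_PLUS_DEPTH in sl/Camera.hpp):

```cpp
// Sketch: request the new NEURAL+ depth mode when opening the camera.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL_PLUS; // enum spelling assumed; check the header
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    sl::Mat depth;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS)
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH); // depth map in the configured UNIT
    zed.close();
    return 0;
}
```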

Region of Interest per Module

  • Reworked the Region of Interest so it is applied independently per module instead of implicitly to all modules at once. This feature allows masking an irrelevant part of the frame to avoid polluting results with unnecessary data. Typical usage includes masking the part of the vehicle on which the camera is mounted, improving positional tracking precision and removing irrelevant depth data for obstacle detection. It can also restrict object or body skeleton detection to part of the frame. The ROI is now applied per module through a new MODULE enum, with the default applying to all modules. Camera::setRegionOfInterest and Camera::getRegionOfInterest set and read the ROI. An auto-detection mode can find the static part of the image automatically while the camera moves for some time; it can be enabled using Camera::startRegionOfInterestAutoDetection, with RegionOfInterestParameters for adjusting the detection. See the sketch below.
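A sketch of both workflows, assuming the MODULE enum carries per-module values such as DEPTH and POSITIONAL_TRACKING and that setRegionOfInterest accepts a set of modules:

```cpp
// Sketch: apply a Region of Interest to selected modules, or auto-detect it.
#include <sl/Camera.hpp>
#include <unordered_set>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    // Build a binary mask the size of the image (non-zero pixels are kept).
    sl::Resolution res = zed.getCameraInformation().camera_configuration.resolution;
    sl::Mat roi_mask(res, sl::MAT_TYPE::U8_C1);
    roi_mask.setTo<sl::uchar1>(255); // keep everything, then zero out e.g. the vehicle hood

    // Apply the ROI to depth and positional tracking only; MODULE values assumed.
    std::unordered_set<sl::MODULE> modules = {sl::MODULE::DEPTH, sl::MODULE::POSITIONAL_TRACKING};
    zed.setRegionOfInterest(roi_mask, modules);

    // Alternatively, let the SDK detect the static part of the image while the camera moves.
    sl::RegionOfInterestParameters roi_params;
    zed.startRegionOfInterestAutoDetection(roi_params);

    zed.close();
    return 0;
}
```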

Global Localization / Geo Tracking

  • Improved Global Localization module accuracy and robustness, compatible with the new Positional Tracking GEN2. It now behaves better across a wider range of conditions, especially across GNSS signal precision levels, from very precise RTK down to dead reckoning.
  • The fusion now supports, and requires, the antenna position relative to the camera as well as the GNSS fix type (2D, 3D, RTK FIX, etc.) to improve its precision. The samples have been clarified and now use SVO2 custom data to record GNSS data, so a single recorded SVO2 file contains all the necessary data. A sketch of this data flow follows below.
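A rough sketch of feeding GNSS readings with their fix type into the Fusion module. The parameter path (gnss_calibration_parameters.gnss_antenna_position) and the GNSS_STATUS field and enum names are assumptions based on this description; camera subscription and the grab loop are omitted for brevity:

```cpp
// Sketch: configure GNSS fusion with the antenna position and ingest a fix.
#include <sl/Fusion.hpp>

int main() {
    sl::Fusion fusion;
    sl::InitFusionParameters init_params;
    fusion.init(init_params);

    // Antenna position relative to the camera (assumed field names).
    sl::PositionalTrackingFusionParameters tracking_params;
    tracking_params.enable_GNSS_fusion = true;
    tracking_params.gnss_calibration_parameters.gnss_antenna_position = sl::float3(0.f, 0.f, 0.3f);
    fusion.enablePositionalTracking(tracking_params);

    // Ingest one GNSS reading, including its fix type.
    sl::GNSSData gnss_data;
    gnss_data.setCoordinates(48.8566, 2.3522, 35.0, false); // lat, lon, alt; false = degrees
    gnss_data.gnss_status = sl::GNSS_STATUS::RTK_FIX;       // fix type (assumed enum value)
    gnss_data.ts.setMilliseconds(1712000000000ULL);         // timestamp of the reading (placeholder)
    fusion.ingestGNSSData(gnss_data);

    // Query the fused geo-referenced pose.
    sl::GeoPose geo_pose;
    fusion.getGeoPose(geo_pose);

    fusion.close();
    return 0;
}
```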

General

  • Improved ZED X image quality by reducing motion blur and optimizing the ISP to limit exposure time, resulting in better image quality and improved positional tracking precision. Installing the SDK automatically installs the ISP configuration override for all ZED X cameras.
  • Improved the image validity check module on the ZED 2 and ZED 2i to detect rare cases where auto exposure gets stuck, underexposing the left image.
  • Improved ZED X capture process for more stable opening of multiple cameras and an optimized color conversion workflow.
  • Reduced CPU usage and memory allocations in the ZED X capture process.
  • Improved the depth minimum range, better estimating the working range from the camera baseline, lens, and depth algorithm used.
  • Enhanced the Body Tracking models for better runtime performance and accuracy compared to version 4.0.
  • Fixed a random issue where the minimum depth value was clamped at 1.5 m.

Fusion

  • Improved Body Tracking fusion synchronization process to better handle low or irregular framerate.
  • Enhanced body tracking multi-camera calibration process in ZED 360.
  • Fixed the depth unit returned by retrieveMeasure, ensuring the user-defined UNIT is applied.

Bug Fixes

  • Fixed an AI performance regression on NVIDIA Pascal GPUs (GTX 1070, GTX 1080, etc.): inference was incorrectly executed in fp16, which these GPUs do not support in hardware, leading to poorer performance than fp32.
  • Fixed grab recovery losing the camera video settings.
  • Fixed a crash occurring when calling the resetPositionalTracking function.
  • Corrected the reported camera model when recording or streaming with ZED Explorer.
  • Fixed a crash caused by Object Detection masks that could become corrupted.

Tools

  • Improved tools UI.
  • Enhanced ZED Depth Viewer interface.
  • Improved ZED Explorer when using multiple cameras.

Samples

  • Improved the Global Localization samples.

Wrappers

Python

  • Improved the Object Detection and Body Tracking parameter checks to avoid passing incorrect arguments that led to invalid behavior (such as hanging retrieve functions).
  • Added a new Concurrent Object Detection and Body Tracking sample to facilitate usage.
  • Fixed a crash occurring when calling the get_device_list function when no cameras were detected.
  • Fixed the function that checks if hardware sensors are available.

Documentation

  • Improved Docker documentation for ZED X.

Congratulations! Sounds like some very exciting improvements.

Would you have an estimate for when the ROS 2 wrapper will be updated to be compatible? It's currently failing to compile against ZED SDK v4.1 :frowning:

Thanks!

Great! Is there any update regarding 3D and the ZED X One? I asked for an update in my related topic (3D from multiple ZED X One). We try to use fusion with multiple cameras and still struggle, but we need to use the X One to get a larger stereo base … any hope there yet?

Looks awesome! Thanks guys, installing now.

A couple of questions:
No mention of NEURAL+ mode or Fusion object detection in this post?

NEURAL+ mode is mentioned on the main site ZED SDK 4.1 - Download | Stereolabs

fusion.retrieveObjects is mentioned here: zed-sdk/object detection/multi-camera/cpp/src/main.cpp at master · stereolabs/zed-sdk · GitHub, but it's not in the documentation.

Forgotten, or was it pulled at the last minute?

@mars3 – The Neural Plus model shows up in ZED_Diagnostic (where you can download and optimize the models) and as sl::DEPTH_MODE::NEURAL_PLUS_DEPTH in the include/sl/Camera.hpp header file… so I think it's there!

Hi,

Depth from the ZED X One is available with this SDK version. The SDK uses two ZED X Ones as if they were a single stereo camera; your code requires close to no changes.

NEURAL+ is available. Object detection fusion is not; the sample you mention was included by mistake.

Coming soon, next week with high probability.

Documentation and tutorials will be published in the next few days.

The post has been updated.

Does the Unreal Plugin (UE 5.3) work with this?
Which version of the UE Plugin do I need?

Hi,
Is there any release date estimate for the ZED X drivers on JetPack 6.0?

Awesome!
Does that mean that IMU and GNSS data are already present in the .svo2 files, so I no longer need to save them in JSON files? If so, are the guides on how to handle these files already updated?

The ZED X Driver for JetPack 6 will be released as soon as NVIDIA provides a production release (it was expected in March). The current JP6 is a Developer Preview, and we found issues that prevent us from releasing a stable version of the ZED X Driver.

JSON is still used, but the data is stored in the SVO instead of an external file.
You can find the example on GitHub.

Looks like ros2-zed-wrapper for 4.1 is out! Awesome!

" * GNSS Fusion temporarily disabled (available with 4.1.1)"
… so, I guess we need to wait for SDK 4.1.1?

Yes, a bug in SDK v4.1.0 does not allow the global position to be correctly initialized.

Hello, I had the zed-oculus application working, built according to your example at Stereo Video Passthrough for Oculus Rift | Stereolabs, so that I could view the ZED camera image instead of the headset's built-in passthrough. It worked with the old ZED camera and the Oculus Quest 2 headset. Then I acquired the Oculus Quest 3 and tested the old build. It worked with the new headset, but the window became narrower due to the headset's different display size.

Now I have purchased the ZED X Mini and ZED Box Orin for the same purpose: broadcasting the stereo stream from the camera to the headset, both locally and remotely. I installed the new ZED SDK on my laptop, and the zed-oculus program stopped working. Could you please tell me where I need to make changes in the code to adapt this program to the new devices?