Position Tracking feature count

Hi,
I was wondering if there’s a way to get the “number of features” currently tracked by the feature tracker of the position tracker. That way, I can reset the position tracker when there are too few features, before error accumulates in the system.

Thanks!

There is a way to perform this operation:
you can retrieve and count the positional tracking landmarks.

The zed-sdk GitHub repository provides sample code that you can use for this purpose:

You can also find a similar example code in the ZED ROS2 Wrapper:


Hi, thanks for the solution.
I added a logger:

RCLCPP_INFO(get_logger(), "SLAM features: %zu total, %zu tracked",
            map_lm3d.size(), map_lm2d.size());

I was wondering what would be a good threshold for map_lm3d.size() and map_lm2d.size() at which to reset SLAM and avoid erroneous measurements.

When the vehicle is very close to features, as seen in the image,

the system accumulates angular error, as seen in the next images.
The pink line is the approximate real alignment of the aisle, and the red points are the points tracked by the SLAM, which have some error, as seen in the image.

thanks!

This is interesting information.
Do you have an SVO recorded in the same conditions that we could use to improve the PT behaviors with similar conditions?

Unfortunately, I do not have this information. You can only evaluate it with tests in the field.

Hi,
I sent the SVO over the support email. Please do share your findings.

Thanks!


Thank you. Really appreciate it.


I have some more findings. I was trying to visualize the landmarks in the image, and I see that in some frames it only tracks ceiling points. What could be the reason for that?

Ideally, it should be like this, IMO.

It looks like motion blur is one of the causes of the reduced number of features.
This condition is normally accentuated when the robot is rotating.

You could try to fix the exposure time to a low value to reduce the amount of blur effect.
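With the ZED ROS 2 Wrapper, the exposure is set through the camera config files. As a hypothetical excerpt (the exact parameter names vary between wrapper versions, so verify them against your own common_stereo_5_2.yaml before using this):

```yaml
# Illustrative excerpt only -- check parameter names in your wrapper version.
video:
  aec_agc: false   # disable auto exposure/gain (USB cameras such as the ZED 2i)
  exposure: 30     # fixed exposure [0-100]; lower values reduce motion blur
  gain: 90         # raise gain to compensate for the darker image
```

For GMSL cameras (ZED X) the wrapper exposes time-based exposure parameters in microseconds instead, so capping the auto-exposure range is the equivalent knob there.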

Are you using a ZED 2i or a ZED X camera?

Hi, thanks for the response. Currently I am using the ZED 2i; I have ordered a ZED X as well to see if it helps (yet to get my hands on it). In the meantime, if you have any recommendations of things I could try, and how I could tune the exposure time, please let me know!

Thanks!

Since GEN_3 uses the IMU, I would think the IMU should be able to help when the visual features go low while it’s turning. Is there something I can do to improve the IMU calibration or weighting?

Thanks!

This is true, but IMU data integration is prone to drift; this is why a full SLAM cannot be performed using only IMU sensors.

I agree, but since this only lasts for a couple of frames, I’d think a tightly coupled VIO system should handle it; but I see your point. If I change the exposure settings, would that impact the SLAM’s performance?

A lower exposure time causes less blur, which helps the feature detection step.

But it would also reduce the light entering the sensor, right? Just wondering about the impact of that

Yes, you must find the right compromise. It depends on your environmental conditions.


Hi,
Any findings from the SVO? Or is it just caused by motion blur?

Thanks!

Can you share your settings? Are you using the default values or have you changed something?

Sure, I have only minor changes. I have attached the files.

common_stereo_5_2.yaml (22.0 KB)

zed2i_5_2.yaml (706 Bytes)

Thanks!

Hi,
I came across a great article on IMU preintegration (link below) and found it really insightful, especially how it can help during motion blur and brief featureless intervals. I’d really appreciate a deeper explanation of how ZED’s positional tracking uses IMU preintegration, and whether there are any potential improvements or enhancements worth exploring.
https://dongwonshin.vercel.app/blog/imu-preintegration-part1
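For context, the core quantities the article derives are the preintegrated motion deltas between keyframes i and j. This is the standard generic formulation (whether the ZED SDK uses exactly this form internally is a separate question), with gyroscope/accelerometer samples ω_k, a_k and biases b_g, b_a:

```latex
\begin{align}
\Delta R_{ij} &= \prod_{k=i}^{j-1} \mathrm{Exp}\big((\omega_k - b_g)\,\Delta t\big) \\
\Delta v_{ij} &= \sum_{k=i}^{j-1} \Delta R_{ik}\,(a_k - b_a)\,\Delta t \\
\Delta p_{ij} &= \sum_{k=i}^{j-1} \Big[\,\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta R_{ik}\,(a_k - b_a)\,\Delta t^{2}\Big]
\end{align}
```

Because these deltas depend only on the IMU samples and biases, they can keep the state estimate coherent across a few blurred or featureless frames without re-integrating from the global pose.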
Thanks in advance!

Unfortunately, this is undisclosed information.

In any case, you are free to test any third-party algorithm for sensor data fusion.
The positional tracking module of the ZED SDK is not mandatory. You can disable it and use your custom positional tracking processing.