Global Localization Playback Mode - GNSS Ingest Error and Termination (Initial X not PSD)

Hi Team,

I am currently running the Global Localization module in playback mode with inference enabled, and after a certain amount of time, the process terminates unexpectedly with the following error:

Ingest error occurred when ingesting GNSSData:  GNSS DATA COVARIANCE MUST VARY
[2025-04-15 13:26:22 UTC][ZED][WARNING] An internal error occurred while performing geotracking fusion for [GeoTracking]. Please verify your input GNSS data. Restarting GNSS and VIO calibration...
terminate called after throwing an instance of 'Bayesian_filter::Numeric_exception'
  what():  Initial X not PSD

This appears to happen when the system tries to fuse GNSS data, and it fails due to invalid covariance or non-varying values. A restart of GNSS and VIO calibration is triggered, followed by a crash with the “Initial X not PSD” error.

Let me know if any logs or data samples are needed for further investigation.

Thanks in advance!

Hi @karthikreddy157
are you using fixed values for the GNSS Data covariance?

Hi @Myzhar

Yes. Is that why this issue is happening?

Yes, that’s exactly what this error message means:
Ingest error occurred when ingesting GNSSData: GNSS DATA COVARIANCE MUST VARY

We are investigating the cause of the crash; this should not happen.

Hi @Myzhar
Thanks for the clarification earlier.

I understand that using fixed GNSS data covariance values can lead to the error:
“GNSS DATA COVARIANCE MUST VARY”, and potentially trigger a crash due to “Initial X not PSD”.

However, I have a question. I’m using a ZED Box with the internal GPS receiver connected to a u-blox GNSS antenna, but without any NTRIP subscription. In this setup, I do receive GPS signals with the status “3D fix”, and I’m also getting eph and epv values.

When I use these eph and epv values to build the GNSS data covariance and ingest it, the get_geo_pose() output from the fusion module seems to be incorrect.

But if I hardcode the covariance as:

0.002, 0, 0,  
0, 0.002, 0,  
0, 0, 2.0

then the get_geo_pose() output appears to be accurate — I’ve verified this multiple times.
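For illustration, this is roughly what the two constructions above look like in Python (a minimal sketch: squaring eph/epv into variances is an assumption about the mapping, and eph/epv are taken to be error estimates in metres as reported by gpsd; the exact code I run may differ):

    # Sketch: row-major 3x3 GNSS position covariance (m^2)
    def build_covariance(eph, epv):
        # eph/epv: horizontal/vertical error estimates in metres from gpsd,
        # squared here to obtain variances (assumed mapping, not verified code)
        h = eph ** 2
        v = epv ** 2
        return [h, 0.0, 0.0,
                0.0, h, 0.0,
                0.0, 0.0, v]

    # Case 1: covariance derived from the live eph/epv values (example readings)
    covariance_from_gpsd = build_covariance(eph=2.5, epv=4.0)

    # Case 2: the hardcoded covariance quoted above, which gives accurate
    # get_geo_pose() results in my tests
    covariance_hardcoded = [0.002, 0.0, 0.0,
                            0.0, 0.002, 0.0,
                            0.0, 0.0, 2.0]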

Also worth noting:

When I use the same hardcoded covariance values in live mode, the fusion doesn’t raise any issues and continues to work correctly.

So I have a few questions:

  1. Does the gpsdclient provide accurate epv and eph values even without an NTRIP correction service?
  2. Are there any unit conversions I should apply to epv and eph before using them for covariance?
  3. Is there anything I might be missing in the GNSS data preparation or fusion pipeline?

Note: I’m using your Global Localization example from GitHub without any modifications.

Thanks again for your support!

Best regards,
Karthik

Hello @karthikreddy157,
Thanks for your message! What SDK version and ZED Box model are you using to run the sample? Is it possible to share an SVO2 with us (so we can reproduce the issue)?

To answer your questions:

Does the gpsdclient provide accurate epv and eph values even without an NTRIP correction service?

This is sometimes an issue. In our tests we found that gpsdclient sometimes seems to underestimate the covariance of the GNSS signal, especially in canyoning scenarios. In that case, applying a scale factor to the covariance might be an option.
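For example, something like this (a minimal sketch; the row-major 9-element covariance layout and the scale value are assumptions to be tuned for your environment):

    # Inflate a gpsd-derived covariance by a safety factor before ingesting it
    def scale_covariance(covariance, scale=4.0):
        # covariance: row-major 9-element list built from eph/epv
        # scale: tuning value (> 1 inflates an under-estimated covariance)
        return [c * scale for c in covariance]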

Are there any unit conversions I should apply to epv and eph before using them for covariance?

Normally no.

Is there anything I might be missing in the GNSS data preparation or fusion pipeline?

If you are using our sample, the GNSS data preparation should already be implemented.

Hi @TanguyHardelin
Thanks for your response!

  • SDK Version: 4.2.5
  • ZED Box Model: GTW-ONX1F563O6C
  • SVO2 Sample File:
https://drive.google.com/file/d/1agzO9fREMuPawLhEHryD14_iPxAzUPOn/view?usp=sharing
  • JSON file:
https://drive.google.com/file/d/1rgLwBEqeaJf80LrtRWhNrCw9TKMK6CHE/view?usp=sharing

Please let me know if you need any additional information or configurations from our side to help reproduce the issue.

Also, could you please let us know the maximum vehicle speed we can maintain to achieve good sensor fusion accuracy, assuming a camera frame rate of 60 FPS?

You can view the visualized fusion results here:

  • On the original covariance
  • On the hardcoded covariance

Looking forward to your insights.

Best regards,
@karthikreddy157

Thank you for sharing the details. I wasn’t able to reproduce the issue you described, but I can provide some guidelines based on the information you’ve shared.

For the type of environment present in your SVO, I recommend using the GEN_1 positional tracking mode with the NEURAL depth mode. This configuration will be better suited than GEN_2, especially for higher speeds.
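As a sketch, the selection would look like this (parameter and enum names are taken from the 4.x Python API from memory, so please double-check them against your SDK version):

    import pyzed.sl as sl

    init_params = sl.InitParameters()
    init_params.depth_mode = sl.DEPTH_MODE.NEURAL              # NEURAL depth mode

    tracking_params = sl.PositionalTrackingParameters()
    tracking_params.mode = sl.POSITIONAL_TRACKING_MODE.GEN_1   # GEN_1 rather than GEN_2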

Your hard-coded covariance seems quite low. I understand your intention: setting the GNSS data with low covariance will produce a fused path that closely follows the GNSS signal. While this approach can work, I suggest using a higher covariance (something like centimeter-level covariance would be ideal). If you proceed with this, be mindful of GNSS edge cases, such as canyoning, which are common in your use case.

Based on these guidelines, your issue should be resolved. Could you confirm if this helps?

Hi @TanguyHardelin
Thanks again for your guidance.

As you suggested, I set the depth mode to NEURAL, but while ingesting GNSS data into the ZED I now get an INVALID TIMESTAMP error; in fact, I get INVALID TIMESTAMP with every depth mode other than PERFORMANCE. As you mentioned, the difference is expected to be less than 4 seconds, and I have verified that the difference between the GNSS time and sl.getCurrentTimeStamp() is less than a second, yet I still get INVALID TIMESTAMP.

Observations:

  • The GNSS timestamp and the sl.getCurrentTimeStamp() value are less than 1 second apart, which is well within the <4 s threshold mentioned in the docs (a minimal sketch of this check follows this list).
  • The issue disappears when switching to a performance depth mode (e.g., PERFORMANCE).
  • The error only occurs in NEURAL and other non-performance depth modes, which are necessary for our use case due to the scene’s complexity and speed.
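For reference, this is roughly the check described above (a minimal sketch: zed is an opened sl.Camera, tpv_time is the ISO 8601 time string from the gpsd TPV report, and the pyzed calls are written from the public samples rather than copied from my attached code):

    import datetime
    import pyzed.sl as sl

    # GNSS time from gpsd, converted to epoch nanoseconds
    gnss_dt = datetime.datetime.fromisoformat(tpv_time.replace("Z", "+00:00"))
    gnss_ns = int(gnss_dt.timestamp() * 1e9)

    # Current camera-side time from the SDK
    cam_ns = zed.get_timestamp(sl.TIME_REFERENCE.CURRENT).get_nanoseconds()

    delta_s = abs(cam_ns - gnss_ns) / 1e9
    print(f"GNSS vs camera time difference: {delta_s:.3f} s")  # observed < 1 s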

CODE

LOGS

Additional Notes:

  • As you suggested earlier, we are actively working on increasing the GNSS covariance to assist with fusion stability.
  • The INVALID_TIMESTAMP issue appears tightly linked to NEURAL mode — possibly due to an internal delay or different sync behavior compared to PERFORMANCE mode.

Questions:

  1. Could NEURAL depth mode introduce a delay that causes GNSS timestamps to be misaligned internally?
  2. Is there a known workaround or recommended approach to ensure reliable GNSS fusion when using NEURAL mode?

In the meantime, if there’s any solution or workaround available for the INVALID_TIMESTAMP issue specifically, it would be extremely helpful to keep our pipeline moving forward.

Thanks in advance for your support!

Hi @karthikreddy157,
Indeed, this is strange. It might be due to high load on your Jetson. Are you running another program or tool in parallel with the SDK? Is the issue still present with the C++ sample?

Based on your data, your timestamps seem correct. If your issue comes from the additional runtime cost of NEURAL, I may have a workaround for you:

  1. If you can switch to 5.0: we released a new NEURAL mode called NEURAL_LIGHT that provides a better balance between runtime and accuracy than PERFORMANCE.
  2. If you can’t switch to 5.0, try changing the depth mode to QUALITY or ULTRA. Either should provide a good balance between speed and accuracy with GEN_1 (see the sketch after this list).
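As a sketch (depth mode enum names as exposed in the Python API; please check which values are available in your installed version):

    import pyzed.sl as sl

    init_params = sl.InitParameters()
    # SDK 5.0: lighter neural mode
    init_params.depth_mode = sl.DEPTH_MODE.NEURAL_LIGHT
    # SDK 4.x alternatives for GEN_1:
    # init_params.depth_mode = sl.DEPTH_MODE.QUALITY
    # init_params.depth_mode = sl.DEPTH_MODE.ULTRA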

Regards,
Tanguy

Hi @TanguyHardelin
Thanks for your response.

We tested the 5.0 release earlier; however, in this version the default depth mode is NEURAL, which unfortunately results in the same Invalid Timestamp error.

As mentioned earlier, in version 4.2, we encounter the Invalid Timestamp error in all depth modes except PERFORMANCE.

To address this concern, we tested the issue using only the live example shared on GitHub, with no other processes running in parallel. Also, for context, we’re using the Python SDK, as our entire solution architecture is built around Python.

Additionally, I have tested the same scenario using the C++ live example from the GitHub repository, and it works correctly without the timestamp issue. Based on this, I suspect the issue might be specific to the Python wrapper.

Let me know if there’s anything else we can try.

Best regards,
Karthik