Hi Team,
I’m working on a real-time application that uses global localization with the NEURAL depth mode from the ZED SDK. My goal is to geotag detected objects in real time using the camera’s GNSS fusion capabilities (i.e., `camera2Geo()` and `geoToCamera()`).
Everything works fine when I use the PERFORMANCE depth mode. However, when I switch to other depth modes such as QUALITY or ULTRA, GNSS fusion fails during real-time operation with the error `INVALID_TIMESTAMP`.
It’s worth noting that:
- The same input (video + GNSS data) works perfectly in playback mode, which suggests the issue lies in real-time processing timing.
- I faced a similar issue earlier during real-time object detection + fusion, which I resolved by moving inference to a separate thread.
Current challenge: since I plan to call `camera2Geo()` and `geoToCamera()` during inference, separating threads becomes tricky — I can’t split fusion and inference as easily. My guess is that the change in depth mode introduces a delay or desynchronization in frame timestamps, leading to GNSS ingestion failures.
My Questions:
- Is it safe and recommended to perform fusion in a separate thread from the main pipeline/inference, especially in real-time use cases?
- Can I manually control the fusion timestamp (i.e., send an older timestamp slightly offset from the current one to match frame timings)?
- Are there any best practices for synchronizing frame timestamps with GNSS timestamps when non-performance depth modes are used?
- Are there any known limitations or workarounds when using `camera2Geo()` and `geoToCamera()` in such multithreaded environments?
Constraints:
- This is a real-time system: post-processing is not an option.
- GNSS fusion and object geo-tagging must happen live during inference.
Any guidance or suggestions are highly appreciated!
Thanks in advance!
Hello @karthikreddy157
Changing the depth mode from PERFORMANCE to another mode will indeed result in a longer runtime. The `INVALID_TIMESTAMP` error occurs when the difference between the current fusion timestamp and the GNSS data timestamp exceeds 4 seconds, or when the GNSS data timestamp is 0. It seems there is a delay between your GNSS data and the fusion process. As you suggested, multi-threading could be a solution.
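That condition can be written as a pre-ingestion guard. This is a minimal illustration assuming nanosecond timestamps; `gnss_sample_is_ingestible` is a hypothetical helper, not a ZED SDK function.

```python
MAX_SKEW_NS = 4_000_000_000  # 4 seconds, per the error condition above

def gnss_sample_is_ingestible(fusion_ts_ns: int, gnss_ts_ns: int) -> bool:
    # Reject samples with an unset timestamp or too large a skew,
    # mirroring the two cases that trigger INVALID_TIMESTAMP.
    if gnss_ts_ns == 0:
        return False
    return abs(fusion_ts_ns - gnss_ts_ns) <= MAX_SKEW_NS

# Example: a sample 3 s behind fusion passes, 5 s behind is rejected.
now = 1_700_000_000_000_000_000
print(gnss_sample_is_ingestible(now, now - 3_000_000_000))  # True
print(gnss_sample_is_ingestible(now, now - 5_000_000_000))  # False
print(gnss_sample_is_ingestible(now, 0))                    # False
```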
Additionally, we have released a new NEURAL depth mode in version 5.0 EA, which offers runtime performance comparable to PERFORMANCE mode but with significantly better accuracy.
Regarding your questions:
- The fusion process is designed to be thread-safe, so you can implement multi-threading without any issues.
- Unfortunately, this is not recommended. Introducing delays can lead to incorrect behavior in the fusion process, as precise timestamps are crucial.
- The SDK already includes a synchronizer. You can find an explanation of the synchronization process here. It should be straightforward to implement.
- Yes, these functions are designed to be thread-safe.
I hope this addresses your questions!
Regards, Tanguy
Hi @TanguyHardelin
Thanks for the reply
I’ve been testing the Global Localization example using ZED SDK 5.0 EA, and I noticed that it defaults to the NEURAL depth mode.
While using the `ingest_gnss_data()` function, I’m encountering an `INVALID_TIMESTAMP` error.
In my current setup, when I perform inference per frame, it introduces a delay of 40–300 milliseconds or more, depending on the frame. Because of this, the GNSS data being ingested may not align correctly with the current or next frame’s timestamp.
Questions:
- Is there a way to log or compare the timestamps that `ingest_gnss_data()` expects versus what is being passed in?
- When inference introduces a variable delay, should I pass the next frame’s timestamp or the current system time when calling `ingest_gnss_data()`?
- According to the documentation, `zed.grab()` “grabs the latest images from the camera.” If inference takes one second, does that mean we lose all frames captured during that time? Or is my understanding wrong? If not, is there a way to get the next frame in real time?
Any guidance on handling this timing issue, especially in playback mode with inference, would be greatly appreciated.
Thanks again!
Hello @karthikreddy157,
You can access the current fusion timestamp through the `getCurrentTimeStamp` method and compare it with the `ts` attribute of your `GNSSData`. As I said in my last reply, the difference is expected to be less than 4 seconds. In any case, you must always give the true GNSS timestamp to the `GNSSData`; otherwise you will degrade the fusion accuracy.
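A simple way to log that comparison is to compute the skew before each ingestion. In this sketch, `fusion_ts_ns` stands in for the value returned by `getCurrentTimeStamp` and `gnss_ts_ns` for the `ts` attribute of your `GNSSData`; the wrapper function itself is our own, not ZED SDK API.

```python
def log_gnss_skew(fusion_ts_ns: int, gnss_ts_ns: int) -> float:
    # Positive skew means the GNSS sample is older than the fusion clock.
    skew_s = (fusion_ts_ns - gnss_ts_ns) / 1e9
    print(f"fusion - gnss skew: {skew_s:+.3f} s")
    return skew_s

# Example with a GNSS sample 0.5 s behind the fusion timestamp.
skew = log_gnss_skew(1_700_000_000_500_000_000, 1_700_000_000_000_000_000)
```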
Your understanding of grab is correct. This is why you should grab from the camera very often. What is your grab frequency?
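The frame-loss arithmetic behind this can be sketched quickly. A ~33 ms frame period (roughly 30 FPS) is an assumed example rate here, not a measured value from your setup.

```python
def frames_skipped(processing_ms: int, frame_period_ms: int) -> int:
    # If grab() always returns the latest frame, every whole frame period
    # spent processing is a frame that arrived and was never grabbed.
    return max(0, processing_ms // frame_period_ms)

print(frames_skipped(1000, 33))  # ~30 frames lost during 1 s of inference
print(frames_skipped(20, 33))    # 20 ms fits within one frame period: 0 lost
```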
Regards,
Tanguy