What does camera.get_timestamp(sl.TIME_REFERENCE.IMAGE) represent on the ZedX cameras using a quad link capture card? I want to know, with the highest precision possible, when the end of the camera exposure occurred to properly calculate transforms and mask areas of the scene.
I’ve read Sensors Time Synchronization - Stereolabs and all the pages/posts on camera latency, but am still confused. Reading ZED Link Quad GPIO Triggering - Stereolabs “The output trigger signals are two square wave, with their rising edge synchronized with the end of the exposure phase of the CMOS sensors.” gives me optimism that the capture time can precisely be known. However, I’m unsure if that is exposed in the API or whether I need to create a setup involving the GPIO pins to know the exposure end time within a few milliseconds.
It’s the system time when the full sample has been received by the GMSL2 deserializer.
You can use this information to know the exact instant when the frame is ready.
The SDK cannot be aware of this information because there’s no link between the deserializer GPIO and the SDK.
The SDK cannot be aware of this information because there’s no link between the deserializer GPIO and the SDK.
Has there been any consideration around integrating this into the SDK? The setup I’m imagining is connecting the capture card GPIO pins to the Jetson’s GPIO pins. Would that allow for synchronizing the timestamps with the true exposure time? Is there any way to transmit this over the CSI port? Any information you can provide here is useful both in the context of potential future SDK features and as general knowledge, since I may try to build this setup myself.
This is under investigation. I’m not sure it’s possible to send GPIO pin status over GMSL2 without introducing latency which will make this mechanism unuseful.
At 60 FPS, 1-2 frames of latency is roughly 17-33 ms. At 30 FPS, 1-2 frames becomes roughly 33-67 ms. It’s suspicious to me that a lower frame rate would result in a higher absolute latency.
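For concreteness, the frame-period arithmetic behind these numbers can be sketched in a few lines (the 1-2 frame delay figure is the one quoted earlier in the thread; the helper name is mine):

```python
# Convert a delay expressed in whole frames into milliseconds at a given FPS.
def frames_to_ms(n_frames: int, fps: float) -> float:
    return n_frames * 1000.0 / fps

for fps in (60, 30, 15):
    lo = frames_to_ms(1, fps)
    hi = frames_to_ms(2, fps)
    print(f"{fps} FPS: 1-2 frame delay = {lo:.1f}-{hi:.1f} ms")
```

The absolute latency grows as the frame rate drops because the delay is measured in frame periods, not in wall-clock time.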
This is under investigation. I’m not sure it’s possible to send GPIO pin status over GMSL2 without introducing latency which will make this mechanism unuseful.
I may be misunderstanding component names, but my goal isn’t to send anything over GMSL2. I want to connect the capture card pins to the Jetson’s GPIO pins. I would then poll on the Jetson for the voltage changes to get a more accurate timestamp. I’d pairwise match the nth frame capture to the timestamp of the nth voltage change.
Will the timestamp itself be delayed 1-2 frames relative to the image, or will the image be delivered 1-2 frames later while the timestamp is correct?
Regarding the 1-2 frame latency due to buffering: if it’s only buffering, where does the stochasticity originate from? That is, why is it not always exactly 1 frame?
Overall, given the timestamp of an image, is there any way to back-calculate the exact time the exposure ended for that image?
Okay, in that case can you please clarify which of the following is the case with respect to the ZED SDK:
The image is timestamped at the moment it is captured (when the exposure ends), but it is only available to be grabbed 1-2 frames later. Essentially, the timestamp is correct with respect to the image, and the overall frame (image + timestamp) is 1-2 frames stale/delayed.
The image is timestamped only after it is available in memory, and the image is available in memory only after 1-2 frames. In that case the image timestamp is 1-2 frames later than the actual time of capture.
Something else. If so, please clarify how the following are related to each other: a) the end of exposure time (the true image capture time), b) the timestamp on the image, and c) when the frame is available to be grabbed.
I second a lot of the questions that have been asked in this thread, and would like to revive it. Reading this, it is not clear to me exactly what the timestamps corresponding to TIME_REFERENCE::IMAGE mean.
From @Myzhar’s first comments it sounds like timestamps will lag the true “shutter closing time” by 1-2 frames. E.g. for a system running at 15 FPS, there will be a lag of about 66 to 133 ms (which is quite high IMO). In another comment, he writes that
The timestamp is assigned as soon as the image is available in the memory, so the only bias is caused by the data transfer time
which is apparently “very fast”. This seems contradictory to me. Let’s say we have a GMSL system operating at 15 FPS. We have two events. Event A: the shutter closes. Event B: the image corresponding to event A is timestamped. What amount of “wall clock” time will pass between A and B? (I think this is a reformulation of @Parv-Maheshwari’s last question).
Secondly, I would like to revisit @robots’ idea of using the TRIG_OUT to match images to the “true capture” time. Did you ever implement this?
I’d pairwise match the nth frame capture to the timestamp of the nth voltage change.
I assume this would be implemented by using the grab() call to increment a frame counter, which would then be matched against a pulse counter → timestamp map (generated via an ISR). This assumes that the number of frames can indeed be accurately tracked through the SDK. @Myzhar, how would this interact with dropped frames? Is getFrameDroppedCount() an exact measure?
Reposting what I wrote in an email to support, @Myzhar.
My hardware platform is this: ZED X One GS, Carrier Mini, Orin NX 8GB, running at FHD resolution and 15 FPS. I need answers specific to this configuration.
I have a navigation application where I need to match images against the pulses produced by the SYNC_OUT port on the Carrier Mini.
I am confused by this and want to make sure I am not misinterpreting. Does the image timestamp correspond to the exposure time, or to the time at which the image is received at the deserializer?
Also, under the assumption that the timestamp corresponds to the deserializer reception:
what determines whether the timestamp delay is one or two frames?
Does it depend on resolution? Frame rate? Something else?
Is it deterministic, in the sense that one set of settings will always produce a 1 frame delay, whereas another will always produce a 2 frame delay?
Is the delay always either 1 OR 2 frames, or can it be anything in between?
Also regarding the “data transfer time” @Myzhar , what does this represent? The time from exposure to reception at the deserializer, or something else?
In our mail thread you also mentioned something about differences in how the ZED SDK works between USB and GMSL2 setups, can you elaborate?
The image timestamps correspond to the time the image is received at the host, which is also (to within ~µs) the time the exposure ends.
The delay is theoretically only 1 frame, but we are currently working on an issue with the SVGA and HD1080 resolutions where, randomly, the delay becomes 2 frames.
The 1-frame delay is due to buffering from the ISP stack on the Jetson host.
The USB3 cameras’ “delay” is caused by buffering internal to the USB3 controller driver.
The GMSL2 cameras’ “delay” is caused by buffering internal to the ISP stack running on the Jetson.
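Taken at face value, these answers also resolve the earlier back-calculation question: the IMAGE timestamp already equals the exposure-end time (to within ~µs), and the buffering delay only affects when the frame becomes grabbable. A minimal sketch of the relationship, assuming the 1-frame delay is constant as stated above (the function name and example timestamp are mine):

```python
# Relating the three instants discussed in this thread, under the reading
# given above: IMAGE timestamp == exposure end (to ~µs), and the frame
# becomes available to grab() about delay_frames frame periods later.
FPS = 15
FRAME_PERIOD_NS = int(1e9 / FPS)

def expected_grab_time_ns(image_timestamp_ns: int, delay_frames: int = 1) -> int:
    """Earliest time the frame is expected to be retrievable via grab()."""
    return image_timestamp_ns + delay_frames * FRAME_PERIOD_NS

ts = 1_700_000_000_000_000_000  # example IMAGE timestamp in ns (epoch time)
print(expected_grab_time_ns(ts) - ts)  # 66666666 ns, i.e. ~66.7 ms at 15 FPS
```

Under this reading, no correction needs to be subtracted from the timestamp to recover the exposure-end time; the delay only staleness-shifts the delivery.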
The image timestamps correspond to the time the image is received at the host, which is also (to within ~µs) the time the exposure ends.
This is very good news to me and good to know, thanks.
The delay is theoretically only 1 frame, but we are currently working on an issue with the SVGA and HD1080 resolutions where, randomly, the delay becomes 2 frames.
Does this mean that I can run HD1200 resolution on the ZED X One GS, and expect a consistent delay of one frame?