Processing Ultra Depth Images Offline to Get Neural Depth

The above link seems to suggest that you can save depth data to post-process offline. However, the link refers to data-format conversion from SVO to MP4 or image sequences. Is it possible to upgrade the quality of the depth data from QUALITY mode to NEURAL mode offline? If so, how?

Hi @kkoe2,
when you use an SVO file as input for the ZED SDK, you can set the initial parameters in the same way as when a "real" camera is connected to the USB 3.0 port.

You can follow the examples available on GitHub:

Hello @Myzhar,
Thanks for the reply! I understand that you can treat the SVO essentially as a real camera when you play it back, or use the ROS wrapper to launch it with an SVO file. However, I want to run a task on a less powerful device and record data on the ZED 2 in ULTRA mode, which allows higher frame rates. Then I would like to process that data offline on my laptop to get NEURAL depth quality at the same high frame rate. I'm not sure whether I'm following the examples correctly or whether this clarification helps: the playback feature is about reading SVO files, and the export seems to just change the file format. Am I missing something? Thanks again in advance.

You can record the SVO by using ZED Explorer. The depth quality setting used while recording does not matter, because the SVO contains the raw synchronized left and right frames; the ZED SDK re-processes them in real time each time you use the SVO.
When you want to process the SVO, you simply set it as an input, for example in the ROS wrapper:
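With the native C++ API, the same idea looks roughly like this. This is a minimal sketch assuming the ZED SDK 3.x API; the SVO file name is a placeholder, and error handling is reduced to the essentials:

```cpp
// Sketch: re-processing a recorded SVO with NEURAL depth (ZED SDK 3.x C++ API).
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init_params;
    // Use the recorded SVO as input instead of a live camera ("recording.svo" is a placeholder).
    init_params.input.setFromSVOFile("recording.svo");
    // The depth mode is chosen here, at processing time, not at record time.
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL;

    sl::Camera zed;
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::Mat depth;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // Each grabbed frame is processed with the NEURAL depth model.
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);
    }
    zed.close();
    return 0;
}
```

This is why the recording-time quality setting is irrelevant: only the raw stereo pairs are stored, and the depth model runs at playback.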


Thanks, that's the file I needed to edit.

int depth_mode;
mNhNs.getParam("depth/quality", depth_mode);
if (!mSvoFilepath.empty()) {
    depth_mode = 4; // sl::DEPTH_MODE::NEURAL - force NEURAL whenever an SVO is played back
}

That's the snippet of code I added so that SVO recordings always play back with NEURAL depth.
