Best way to do real-time video processing?

Hi there,
I basically want to apply filters like inverted colors, edge detection, and more (maybe slow motion?) to the real world.
I’m thinking of using a VR headset as the monitor, with cameras to capture the outside world, and a program in the middle to do the video processing.

The best example I know is this one.
Sadly, it is vastly outdated and the links are dead.

This other example gives a fair amount of information about feeding the video stream from the ZED cameras to an Oculus headset.

Questions:

  1. I’m not sure what hardware setup would be best suited for the task. A standalone headset would be great, but I doubt a Raspberry Pi would be powerful enough for the video processing. Maybe I can start with a wired setup and figure out something later.
    What headset would you recommend? I know VR headsets aren’t the only option, since some FPV goggles offer HDMI input and other interesting features.

  2. How much control do I have over the video using the ZED SDK?
    Can I, for instance, increase the RGB values of each pixel of the received video by 10? Can I use multithreading for efficiency?
    Bonus questions to get better intuition about the software:
    Can I process the video from each camera individually, and how do I resynchronize the output videos in that case?
    Can I use four cameras instead of two, displaying four images on the monitor for instance?

Many thanks in advance

Hi,

Indeed, the blog post you linked is very old and outdated. You can now use the Unity plugin (https://github.com/stereolabs/zed-unity) for AR applications.

  1. It is very important to remember that the ZED SDK can only be installed on a Windows/Linux computer with an NVIDIA GPU. This means you can’t install your app on an Oculus Quest, for example; the headset has to be connected to the “host” computer (with the SDK installed).
    The Oculus Quest is probably one of the best headsets out there, but here you won’t be able to use it wirelessly.
  2. You can modify some video settings through the SDK API (https://www.stereolabs.com/docs/api/group__Video__group.html#ga7bab4c6ca4fd971055eca1fdd9f4223d); anything else has to be implemented on your side (see the rough sketch after this list).
    You can retrieve both the left and right images from the same timestamp (that’s what is done in Unity), but modifying the images can indeed add some computation latency.
    The current Unity plugin is made to display the left and right images (on the left and right screens of the headset). Changing that will require some work.
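To give a concrete idea of the custom per-pixel part, here is a minimal, untested sketch using the Python API (pyzed) and numpy rather than Unity, assuming SDK 3.x-style names; the resolution, the brightness value, and the “+10 per channel” offset are purely illustrative.

```python
# Minimal sketch (Python API "pyzed" + numpy, SDK 3.x-style names assumed):
# grab a synchronized left/right pair, tweak a built-in video setting,
# and brighten every pixel by 10 as a stand-in for a custom filter.
import numpy as np
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.HD720  # illustrative choice

if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("Failed to open the ZED camera")

# Built-in settings (brightness, contrast, ...) go through the SDK directly.
zed.set_camera_settings(sl.VIDEO_SETTINGS.BRIGHTNESS, 4)

left, right = sl.Mat(), sl.Mat()
runtime = sl.RuntimeParameters()

if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    # Both views come from the same grab() call, so they share a timestamp.
    zed.retrieve_image(left, sl.VIEW.LEFT)
    zed.retrieve_image(right, sl.VIEW.RIGHT)

    for view in (left, right):
        # get_data() exposes the image as a numpy array (BGRA), so any
        # custom per-pixel work is plain numpy: here, +10 on B, G and R.
        frame = view.get_data()
        frame[:, :, :3] = np.clip(
            frame[:, :, :3].astype(np.int16) + 10, 0, 255
        ).astype(np.uint8)

zed.close()
```

Multithreading the processing is likewise up to your application; the SDK just hands you the frames. Inside the Unity plugin, the equivalent tweak would probably be done on the GPU (a shader or material applied to the camera textures) to keep the added latency low.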

Best,
Benjamin Vallon

Stereolabs Support

Thanks for all the info.
Do you think using an old Oculus Dev Kit 2 would be doable, or would it cause too many backward-compatibility issues?

I guess you can use it as long as it’s compatible with the Oculus Integration plugin.

Stereolabs Support