Hi there,
I basically want to apply filters like inverted colors, edge detection, and more (maybe slow-mo?) to the real world.
I’m thinking of using a VR headset as the monitor, with cameras to capture the outside world and a program in the middle to do the video processing.
The best example I know of is this one. Sadly, it is vastly outdated and the links are dead.
This other example gives a fair amount of information about feeding the video stream from the ZED cameras to an Oculus headset.
Questions:

- I’m not sure what hardware setup would be best suited for the task. A standalone headset would be great, but I doubt a Raspberry Pi would be powerful enough for the video processing. Maybe I can start with a wired setup and figure something out later. What headset would you recommend? I know VR headsets aren’t the only option, since some FPV goggles offer HDMI input and other interesting features.
- How much control do I have over the video using the ZED SDK? Can I, for instance, increase the RGB values by 10 for each pixel of the received video? Can I use multithreading for efficiency?
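For context on that second question: as far as I can tell, the ZED SDK hands each grabbed frame over as a raw pixel buffer (in Python, a NumPy array via `Mat.get_data()` after `retrieve_image()`), so a per-pixel tweak like +10 on every channel is just array arithmetic. A minimal sketch of what I have in mind, with the camera calls omitted so it runs standalone (the BGRA channel order is my assumption from the SDK docs):

```python
import numpy as np

def boost_rgb(frame: np.ndarray, amount: int = 10) -> np.ndarray:
    """Add `amount` to the B, G, R channels of an 8-bit frame,
    clipping at 255 so bright pixels don't wrap around."""
    out = frame.copy()
    # Widen to int16 before adding so 250 + 10 doesn't overflow uint8.
    channels = out[..., :3].astype(np.int16) + amount
    out[..., :3] = np.clip(channels, 0, 255).astype(np.uint8)
    # Leave the alpha channel (index 3) untouched.
    return out

# Stand-in for a frame that would come from retrieve_image(...).get_data()
frame = np.zeros((2, 2, 4), dtype=np.uint8)
frame[..., 3] = 255  # opaque alpha
bright = boost_rgb(frame)
print(bright[0, 0])  # [10 10 10 255]
```

The same pattern would apply to heavier filters (edge detection etc.), just swapping the arithmetic for an OpenCV call on the same array.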
Bonus questions, to get a better intuition about the software:

- Can I process the video from each camera individually? If so, how do I resync the output streams?
- Can I use four cameras instead of two, displaying four images on the monitor for instance?
Many thanks in advance