Stream Video to non-CUDA device

I’m using the ZED 2 connected to a Jetson Nano on a UAV. All processing is done on the Jetson Nano because the system has to work without a ground station.
For debugging, I want to view the image on my laptop. Unfortunately, my laptop does not have an NVIDIA GPU, so I can’t use the Local Streaming feature.
This post also explains that the local stream uses a proprietary format, so opening it in VLC or another application won’t work.
Of course, I could retrieve the images manually, encode them, and send them somehow, but if the ZED SDK already encodes the images for recording, can’t it also stream them in a format that a web browser or VLC understands? I already have a solution for the sensor data, so I only need the video.
I already searched Google and this forum for similar questions, but I didn’t find anything. That surprises me a bit, because I don’t think my use case is that exotic.

EDIT: The Jetson and the laptop are connected via WiFi.

Hi,

You might be interested in the following repository:

In particular, it shows how to create an RTSP server with GStreamer to stream the left video (or the right, or another view), which you can then open in VLC, for example.
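
For reference, here is a minimal sketch of what such an RTSP server can look like in C++, assuming the zed-gstreamer plugin is installed so that the zedsrc element is available. The pipeline string, port, and mount point are illustrative, and on a Jetson you would likely swap x264enc for the hardware encoder:

```cpp
// Minimal GStreamer RTSP server sketch (illustrative, not the exact code from
// the repository). Serves whatever zedsrc outputs, H.264-encoded, at
// rtsp://<jetson-ip>:8554/zed.
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(nullptr, FALSE);

    GstRTSPServer *server = gst_rtsp_server_new();
    gst_rtsp_server_set_service(server, "8554");  // RTSP port (illustrative)

    // The launch string runs inside the server: capture from the ZED,
    // convert, encode to H.264 and packetize as RTP.
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( zedsrc ! videoconvert ! x264enc tune=zerolatency bitrate=4000 "
        "! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);  // one pipeline for all clients

    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
    gst_rtsp_mount_points_add_factory(mounts, "/zed", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, nullptr);
    g_main_loop_run(loop);
    return 0;
}
```

With something like this running on the Jetson, opening rtsp://<jetson-ip>:8554/zed in VLC on the laptop should show the feed without needing CUDA on the client side.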

/* OB */

Hi, thanks for your quick reply. Can I use the GStreamer source plugin and the C++ API at the same time?

Hi there! I have the same question. I would like to stream only the video from the left & right cameras and request it via HTTP. But I couldn’t find the camera URL of the ZED Mini… The stereo images will be received and processed by a standalone VR device built with Unity 3D. This should be possible, as I only want the AR environment without any AR post-processing. But the current ZED SDK and ZED Unity plugin make it very difficult to retrieve those stereo images and use them somewhere else, like Unity.

I asked two months ago in a ZED Unity Plugin GitHub issue but didn’t get a response. I hope someone here can shed some light on this.

Yes, this is what is done here:

GStreamer and the ZED SDK are used at the same time.


@keli95566 What you need to do is create your side-by-side image (Left + Right) and then send it over the network via RTSP or something else (for latency, RTSP is better than HTTP).

Again, the zed-gstreamer repository can generate this stream. Look at the repo here:
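
If you would rather assemble the frames in your own code before handing them to a streaming pipeline, here is a minimal sketch with the C++ API (only standard sl:: calls; the resolution and the loop condition are illustrative):

```cpp
// Sketch: grab the combined Left+Right (side-by-side) view with the ZED SDK.
// The resulting sl::Mat can then be fed to an encoder / RTSP pipeline.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.camera_resolution = sl::RESOLUTION::HD720;  // illustrative

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::Mat side_by_side;
    while (true) {  // replace with your own exit condition
        if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
            // VIEW::SIDE_BY_SIDE packs the left and right images into one frame.
            zed.retrieveImage(side_by_side, sl::VIEW::SIDE_BY_SIDE);
            // side_by_side.getPtr<sl::uchar1>() now points to the BGRA pixels,
            // ready to be pushed into your network/streaming code.
        }
    }

    zed.close();
    return 0;
}
```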

@obraun-sl Thanks again for your reply. This code looks exactly like what I want. I will try to integrate it into my project when I have time.


Thank you! I will test it in the next few days and see if it works.

Hi there, an update. I managed to stream ZED data to a standalone VR device running on ARMv7 and OpenGL ES 3; however, I see a huge jittering artifact that occurs randomly, as the GIF shows.

[GIF: 02_android_first_working_version_jittering]

I don’t have a computer graphics background and can’t understand why this occurs. The artifact only appears when running on the Android VR device, not in the Unity Windows Editor. Is the video decoding somehow wrong, or is it the VR headset hardware? I would really appreciate it if anyone could kindly help!

I could use help with this question as well.

I am able to get GStreamer to work from the command line using the various examples, but while it is running I cannot open the ZED camera in my own code and control the depth mode to do spatial mapping with the ZED SDK.

Does anyone have instructions or a code sample for opening the ZED camera with the ZED SDK and then getting a GStreamer RTP stream of a single camera feed (left or right)? Put another way, since I am already calling grab() and retrieveImage() in my code, can I push those frames into a GStreamer pipeline somehow?

Thank you

GB

Hi @grahambriggs
it is not possible to call the open function for the same camera from two different processes.
The only solution is for one of the processes to start a ZED stream and for the other to connect to that stream.
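
A minimal sketch of that two-process split, assuming both processes run on the Jetson (the codec, port, IP, and depth mode below are placeholders):

```cpp
// Process A (e.g. the GStreamer/streaming app): opens the physical camera
// once and publishes a ZED stream on the local network.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::StreamingParameters stream_params;
    stream_params.codec = sl::STREAMING_CODEC::H264;  // placeholder
    stream_params.port  = 30000;                      // placeholder
    zed.enableStreaming(stream_params);

    while (true)  // keep grabbing so the stream stays alive
        zed.grab();
}
```

The second process then connects to that stream instead of the physical camera, so both can run at the same time:

```cpp
// Process B (e.g. the spatial-mapping app): consumes the ZED stream.
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init_params;
    init_params.input.setFromStream("127.0.0.1", 30000);  // sender IP + port (placeholders)
    init_params.depth_mode = sl::DEPTH_MODE::ULTRA;       // placeholder

    sl::Camera zed;
    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    // ... enablePositionalTracking(), enableSpatialMapping(), grab() loop, etc.
    return 0;
}
```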

@keli95566 Do you have any updates on streaming to an HMD?
I’m also working on a similar project, but still unsure about the first steps.
Thank you

Hi there! Yes, I figured it out in the end. If you need to build a standalone application for an HMD, the process is somewhat complicated. I used GStreamer and the Unity game engine to build the application. First, you need to build the Unity-GStreamer native plugin into Android .so files (GitHub - keli95566/gstreamerUnityPluginAndroid: This repository is forked from https://github.com/mrayy/mrayGStreamerUnity with updated steps on how to build the original mrayGstreamerUnity C++ plugins for Unity Android builds). Then Unity can use the GStreamer plugin to receive the video content. To check out the GStreamer pipeline I used, please see this repo: GitHub - keli95566/MRTunnelingPico: Implementation of the mixed reality tunneling effects on Pico Neo 3 Pro Eye. It has been about two years since I worked on this project, and I am not sure whether there are cleaner solutions by now.

Thank you for your kind reply! @keli95566

It seems like you have searched thoroughly for solutions, and I agree that the ones you mentioned are sound.

Before I take a closer look at your GitHub, I have a few questions.

  1. Have you tried using OpenCV live streaming, as shown in these examples (roughly the kind of pipeline sketched below): https://medium.com/nerd-for-tech/live-streaming-using-opencv-c0ef28a5e497 or https://community.stereolabs.com/t/nvidia-agx-orin-python-example-of-zed-camera-stream-opencv-rtsp-out/1869?
  2. As you know, streaming requires low latency and no jittering.
    So, since I am new to streaming (I am a robotics engineer :smiling_face_with_tear:),
    and it seems like you are using a standalone VR headset, would it be better for me to decode the video on a PC or laptop with a GPU and then mirror it to the VR headset for low latency?
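
For context, this is roughly what I understand those OpenCV examples to be doing: a rough sketch that pushes frames into a GStreamer pipeline through cv::VideoWriter, assuming OpenCV was built with GStreamer support. Host, port, and encoder settings are placeholders, and it sends RTP over UDP rather than full RTSP, for brevity:

```cpp
// Rough sketch: OpenCV "live streaming" by writing frames into a GStreamer
// pipeline. Not a tested configuration; all network settings are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // any frame source (the ZED also enumerates as a UVC device)
    if (!cap.isOpened())
        return 1;

    const int width  = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH));
    const int height = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT));

    // appsrc receives the frames written below; the rest encodes them to H.264
    // and sends RTP packets to the receiver's address.
    cv::VideoWriter writer(
        "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=2000 "
        "! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.50 port=5000",
        cv::CAP_GSTREAMER, 0, 30.0, cv::Size(width, height), true);
    if (!writer.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
        writer.write(frame);
    return 0;
}
```

On the receiving end this needs a matching RTP depayload/decode pipeline (or an SDP file for a player like VLC).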

Thank you again for your help :slight_smile: