ZED X Mini + Oculus Quest 3

Hello, I purchased a ZED X Mini and a ZED Box from you… My task is to broadcast the video stream from the camera to an Oculus Quest 3. I found your sample program GitHub - stereolabs/zed-oculus: ZED Viewer for Oculus Rift, but it is outdated: I can’t build it with CMake, and I can’t find the paths to the updated libraries. The ZED X also needs different init parameters, because the stream will come from a network broadcast rather than from a camera connected over USB. The program should let me choose between these inputs: a USB camera, a network stream, or a recorded SVO file. Could you update the sample so that it works with the ZED X and Oculus Quest 3, and fix the CMakeLists so the program builds correctly? Or at least give some hints on what code changes I should make? I think it would be useful for other users facing a similar task, especially since remote control of robots from a VR headset with a first-person view is in high demand these days.

Hi,

Indeed, we do not update this sample anymore. For AR or VR applications, we now recommend using our Unity plugin. You can find it here: GitHub - stereolabs/zed-unity: ZED SDK Unity plugin

My task is not to develop an AR application. I just need to watch the video stream from the ZED X camera in the Oculus Quest 3 headset, preferably with the option of remote viewing. Is the old sample suitable for this task if the init_parameters in it are edited? Ideally, I would like to send the video stream to WebXR so that it can be viewed remotely from various headsets.

I can’t test it right now as I don’t have access to an Oculus headset, but I’m not sure you need to change the init parameters to use a ZED X camera, except if the input is a stream.

In this case, the input is a stream: the ZED X Mini does not connect over USB, it connects to the ZED Box over GMSL2, and the ZED Box then streams the video to the laptop, so that stream needs to be used as the input.

Yes,

To use a stream from a ZED camera as the input of the ZED SDK, you need to specify it in the init parameters with this line:

init_parameters.input.setFromStream("127.0.0.1", 30000); 
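
For reference, a minimal receiving program might look roughly like this (an untested sketch, not code from the zed-oculus sample; the IP address and port are placeholders for the sender’s address):

#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_parameters;
    // Placeholder address: use the IP/port the ZED Box is streaming on
    init_parameters.input.setFromStream("192.168.1.10", 30000);

    if (zed.open(init_parameters) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::Mat image;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // Side-by-side left/right view, convenient for stereo display
        zed.retrieveImage(image, sl::VIEW::SIDE_BY_SIDE);
    }
    zed.close();
    return 0;
}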

Stereolabs Support

Thank you, so my initial assumption was correct. I still need to think about how to add a menu for selecting the input source, so that the same program can be used to view recorded SVO files and the stream from an older ZED connected over USB.

You can reuse the function, available in most of our more recent code samples, that parses the command-line arguments to choose between the input modes.
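
For example, something like the following helper could set the input from the first command-line argument (a hypothetical sketch, not the exact function from the samples):

#include <sl/Camera.hpp>
#include <string>

// Hypothetical input selector: no argument = first connected camera (USB or GMSL),
// "IP:port" = network stream, a path containing ".svo" = recorded file playback.
void setInputFromArgs(sl::InitParameters &init_parameters, int argc, char **argv) {
    if (argc < 2) return; // keep the default: live camera

    std::string arg(argv[1]);
    size_t colon = arg.find(':');
    if (arg.find(".svo") != std::string::npos) {
        init_parameters.input.setFromSVOFile(arg.c_str());        // recorded SVO file
    } else if (colon != std::string::npos) {
        std::string ip = arg.substr(0, colon);
        unsigned short port = static_cast<unsigned short>(std::stoi(arg.substr(colon + 1)));
        init_parameters.input.setFromStream(ip.c_str(), port);    // stream from the ZED Box
    }
}

You would then launch the same executable with no argument for a USB camera, with something like 192.168.1.10:30000 for the ZED Box stream, or with a path to a .svo file for playback.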

Thank you very much, just what I need.

One more question… When using the ZED X stream as the input for the Oculus headset, will depth data be used?

The depth data is not streamed; only the side-by-side RGB image is. The depth can then be computed by the ZED SDK on the machine that receives the stream.
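
For example, on the receiving side you would only need to pick a depth mode and retrieve the depth measure, roughly like this (an untested fragment, reusing the sl::Camera "zed" from the sketch above):

// Before zed.open(): choose a depth mode so the SDK computes depth from the streamed stereo pair
init_parameters.depth_mode = sl::DEPTH_MODE::NEURAL;

// Inside the grab loop:
sl::Mat depth_map;
zed.retrieveMeasure(depth_map, sl::MEASURE::DEPTH); // per-pixel depth in the SDK's configured unit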

Stereolabs Support

So if I enable the depth data in the old sample, then after processing on the computer the video fed into the headset will already take the depth data into account? This is a key point, because without depth data the image in the headset looks distorted and causes severe dizziness when viewing. I would like to achieve a result comparable to the passthrough mode of the Quest 3. I am attaching a modification of the code that a neural network generated for me, which is supposed to include depth data in the image and to select the input source. Will the code work with this modification?
zed-vr-modified.txt (6.4 KB)

No, this sample is made to display the image from the ZED in an Oculus headset, nothing more. You can’t “include” the depth data in the image the way you describe.
The closest thing you might be able to do is send the point cloud instead of the image, but that is not supported at all by this sample.
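
For completeness, retrieving the colored point cloud on the SDK side would look roughly like this (again an untested fragment with the same sl::Camera "zed" as above; rendering it in the headset is a separate, much larger task that this sample does not cover):

sl::Mat point_cloud;
zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZRGBA); // per-pixel X, Y, Z plus packed RGBA color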

So why is the depth data not used in the sample that displays the image in the Oculus headset? Without depth data the display is distorted and causes dizziness when viewing. Are there any samples for a proper display in an Oculus headset that use depth data?

I don’t understand how the depth information would undistort the image, but it does allow you to integrate AR objects into the images by occluding the virtual objects that are behind “real” objects.
This is implemented in our Unity plugin: Build your first AR/MR App with Unity - Stereolabs

I don’t need to develop AR applications. I just want video with depth data, without any virtual objects, to view in the headset, the same way it is implemented with the Quest 3 passthrough cameras. In the Quest 3 passthrough mode I see my room and surroundings through the headset cameras without distortion, because the video from the cameras is combined with depth data. I need to do the same, only using the ZED instead of the built-in cameras. Will this sample help me implement plain video with depth data, without virtual objects, so that I can view it in the headset?

I’m sorry, I don’t understand how the depth data would affect the image distortion.

Stereolabs Support

It has a very strong influence. I watched the video from an older ZED in a Quest 2 using the sample that does not use depth data: my head really gets dizzy when moving, and the dimensions of objects are greatly distorted. In the Oculus Passthrough mode, which uses the built-in cameras (even though it is black and white), you can move safely without dizziness and the size of objects is realistic. In the Quest 3 Passthrough mode it is implemented even better. It took me a long time to figure out why there was such a difference, until I discovered that the sample did not support depth data… I need to implement the same Passthrough-style mode with depth data, only using the ZED cameras. As I understand it, it is easier to do this with the ZED Unity plugin, since I am not a programmer and don’t understand code well, and there is a ready-made example there.