ROI feature not working with Stream (4.1.x) and SVO2

Hello,

I’m currently testing the ROI feature for body tracking on an interactive project. I’m working with the Derivative team, who are integrating the latest SDK to make this feature available in TouchDesigner (TD).

So far, the results using a stream (Jetson SDK 4.1.3) have been inconsistent and unreliable.
However, when I test with a camera connected via USB, the results are consistent. I therefore wonder if the stream is the issue.
The setup for the USB camera is different, due to the narrow space and the fact that I’m the only person to be tracked. By contrast, the project covers a large area and involves many people, which is why the ROI feature is necessary.

A Jetson running SDK 4.1.3 streams to a host running the latest TouchDesigner release (SDK 5.0.2).

Can you confirm (again) that the ROI feature is supposed to work with a stream generated with SDK 4.1.3?
This question has already been answered here: Streaming from Jetson SDK 4.1.2 To SDK 5.0, any issue?
But I prefer to ask again, as I’m spending so much time trying to figure out what the issue is.

It’s almost the same question for SVO2.
I ran some tests with SVO2 files recorded with SDK 4.2, but the ROI feature didn’t work.

Are SVO2 files supposed to be compatible with the ROI feature?
I can’t test SVO2 files generated with the new 5.0.x SDK, as they raise an error in TD for now, but this should be fixed soon.

Can the mask be composed of several separate areas within the frame?

Regarding the detection logic: if a body is ¾ outside the ROI, is it not detected, and if it is ¾ inside the ROI, is it detected?

Thanks

Hi,

The input (stream, live, or SVO) should not impact the ROI feature at all.
In your case, the ZED BOX is only capturing the images and streaming them to the computer. The body tracking and the ROI computation are done by the computer receiving the image stream.

However, as I said in your other post, the stream quality can impact the body tracking quality and might change how the ROI filtering behaves.

The current implementation uses the bounding box, not the skeleton itself, to filter detections outside the ROI. If more than 50% of the bounding box is outside the ROI, the detection is discarded. I’m not sure the current implementation will correctly handle multiple ROI areas in the same image.
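A minimal sketch of that filtering rule, assuming an axis-aligned bounding box and a binary ROI mask at image resolution (illustrative only, not the SDK’s actual code):

```python
def keep_detection(bbox, roi_mask):
    """Return True if at least 50% of the bounding box lies inside the ROI.

    bbox: (x, y, w, h) axis-aligned box in pixel coordinates.
    roi_mask: 2D list of 0/1 values, same resolution as the image.
    Illustrates the described rule; not the SDK's actual implementation.
    """
    x, y, w, h = bbox
    inside = 0
    for row in range(y, y + h):
        for col in range(x, x + w):
            if roi_mask[row][col]:
                inside += 1
    # Discard the detection when more than half the box is outside the ROI
    return inside >= 0.5 * (w * h)

# ROI covering the left half of a 10x10 image
mask = [[1 if col < 5 else 0 for col in range(10)] for row in range(10)]

print(keep_detection((0, 0, 4, 4), mask))  # fully inside the ROI -> True
print(keep_detection((6, 0, 4, 4), mask))  # fully outside the ROI -> False
```

With this rule, a body whose box is ¾ outside the ROI is discarded, and one ¾ inside is kept, matching the question above.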

Stereolabs Support


Hey!

Thanks.

Is a stream generated with 4.x compatible with a host running 5.x anyway?

I will check with the Derivative team to see where the issue lies.

> I’m not sure the current implementation will correctly handle multiple ROI in the same image.

I ran some tests with a USB camera, and it seems the ROI works with at least two different areas in the frame.
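For what it’s worth, composing a single binary ROI mask out of two separate areas is straightforward; a sketch in pure Python (the coordinates are invented for illustration):

```python
def compose_roi_mask(width, height, areas):
    """Build a binary ROI mask from a list of (x, y, w, h) rectangles.

    Pixels inside any rectangle are set to 1; everything else stays 0.
    Illustrative only; not taken from the SDK.
    """
    mask = [[0] * width for _ in range(height)]
    for x, y, w, h in areas:
        for row in range(y, y + h):
            for col in range(x, x + w):
                mask[row][col] = 1
    return mask

# Two separate areas in a 16x8 frame (example coordinates)
mask = compose_roi_mask(16, 8, [(1, 1, 4, 4), (10, 2, 5, 5)])
```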

Best

Hi,

A new streaming (and recording) mode was added in 4.1, so you can run into compatibility issues if you stream from 4.0 to 5.x. With 4.1 or newer, it’s fully compatible.

Hi,
Thanks for clarifying that.
The Jetson is running SDK 4.1.2.

It should be compatible, but, as mentioned in this other topic, there are issues with the quality of streams generated by Jetson Nano 4.1.3: Quality degradation with a stream generated by a Jetson Nano (4.1.3)?.

The Jetson Nano is struggling to deliver an accurate feed. This explains all the issues I have encountered: the ROI not working with the stream, the SVO crashing the runtime, and erratic body tracking.

I tested with a ZED2i connected via USB, and all sources (camera, stream, and SVO) worked perfectly with the ROI (two areas in the frame).

I will ask the hardware integrator either to find an option with the fiber optic to use a USB 3 extender (they pulled 400 m of cable…)
or to upgrade the Jetson to a more powerful one.


There is some ongoing work to improve the reliability and stability of the camera streaming and recording. This should be improved in one of the next versions of the SDK as well.

You can also try adjusting the streaming parameters, such as the bitrate (StreamingParameters Struct Reference | API Reference | Stereolabs).

Improving the reliability of the stream would be a great addition.

If I recall correctly, installing the SDK on the Jetson Nano (NVIDIA-Jetpack 4.6.5) was really difficult (due to a conflict with the NumPy version).

I doubt that I had the option to customise the stream; I could only enable the maximum performance mode.

Do you think a Jetson Orin NX could handle the task of providing a decent stream?

Best

Yes for sure.

I just realised you were using a Jetson Nano. Is it an Orin Nano? Because that specific model does not have a hardware encoder, you should not be able to stream the images with the ZED SDK.

Stereolabs Support

It is the previous generation Jetson Nano, not the Orin.

It is an NVIDIA Jetson Nano 4GB production module, packaged by reComputer.

I asked about this on the forum back then.

SDK install option on Jetson for streaming.

No one raised a warning.

I think you are right; there is no hardware encoder.

However, it is currently capable of streaming, albeit not optimally.

The previous gen has a hardware encoder, but Nvidia removed it for the Orin, for some reason. That’s why you are able to stream.

Thanks for the clarification earlier.
Although the Jetson Nano (old generation) can stream using SDK 4.1.2, it’s probably not an ideal choice for my project: body tracking detection is inconsistent, and the Region of Interest (ROI) feature isn’t functioning.

Regarding the Orin series, I’d like to confirm:
Are you saying that none of the Orin modules, including the Orin NX, have a hardware encoder?

If that’s the case, does that mean that the ZED Box, which is built around the Orin, is not meant to handle encoding tasks?
Is its main purpose instead to run inference using either the full SDK or the Fusion API?

Only the Orin Nano does not have hardware encoders; Orin NX or AGX do.

Stereolabs Support