I’m currently testing the ROI feature for body tracking on an interactive project. I’m working with the Derivative team, who are integrating the latest SDK to make this feature available in TouchDesigner (TD).
So far, the results using a stream (Jetson SDK 4.1.3) have been inconsistent and unreliable.
However, when I test with a camera connected via USB, the results are consistent. I therefore wonder if the stream is the issue.
The setup for the USB camera is different, due to the narrow space and the fact that I’m the only person to be tracked. By contrast, the project covers a large area and involves many people, which is why the ROI feature is necessary.
A Jetson running SDK 4.1.3 streams to a host running the latest TouchDesigner release (SDK 5.0.2).
Can you confirm (again) that the ROI feature is supposed to work with a stream generated with SDK 4.1.3?
This question has already been answered here: Streaming from Jetson SDK 4.1.2 To SDK 5.0, any issue?
But I prefer to ask again, as I'm spending so much time trying to figure out what the issue is.
It’s almost the same question with SVO2.
I ran some tests with SVO2 recorded with the 4.2 SDK, but the ROI feature didn’t work.
Are they supposed to be compatible with ROI?
I can’t test the SVO2 generated with the new 5.0.x SDK as it raises an error in TD for now, but this should be fixed soon.
Can the mask be composed of different areas within the frame?
Regarding the detection logic: if a body is ¾ outside the ROI, there is no detection, while a body that is ¾ inside the ROI is detected?
The input (stream, live or svo) should not impact the ROI feature at all.
In your case, the ZED BOX is only capturing the images and streaming them to the computer. The body tracking and the ROI computation are done by the computer receiving the image stream.
However, as I said in your other post, this can impact the body tracking quality and might change how the ROI filtering behaves.
The current implementation uses the bounding box, not the skeleton itself, to filter detections outside the ROI. If more than 50% of the bounding box is outside the ROI, the detection is discarded. I'm not sure the current implementation will correctly handle multiple ROIs in the same image.
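The bounding-box filter described above can be sketched in plain Python. This is an illustrative re-implementation of the stated rule, not the ZED SDK's actual code; the mask is modeled as a union of axis-aligned rectangles, which also shows how a single mask could cover several disjoint areas:

```python
# Illustrative sketch of the described ROI filter (not ZED SDK code).
# The ROI mask is a union of non-overlapping rectangles (x, y, w, h);
# a detection is kept only if at least 50% of its bounding box
# overlaps the mask.

def overlap_area(box, rect):
    """Area of intersection between two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = box
    bx, by, bw, bh = rect
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def keep_detection(box, roi_rects, threshold=0.5):
    """Keep the detection if >= threshold of its bbox lies inside the ROI.

    Assumes the ROI rectangles do not overlap each other, so their
    intersections with the box can simply be summed.
    """
    x, y, w, h = box
    inside = sum(overlap_area(box, r) for r in roi_rects)
    return inside / (w * h) >= threshold

# Two disjoint ROI areas in the frame.
roi = [(0, 0, 100, 100), (300, 0, 100, 100)]

print(keep_detection((10, 10, 50, 50), roi))   # fully inside -> True
print(keep_detection((80, 10, 80, 50), roi))   # only 25% inside -> False
print(keep_detection((320, 10, 40, 40), roi))  # inside the second area -> True
```

With this logic, a mask made of two separate areas behaves exactly like one area: only the total covered fraction of the box matters.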
A new streaming (and recording) mode was added in 4.1, so you can have compatibility issues if you are streaming from 4.0 to 5.x. But with 4.1 or newer, it's fully compatible.
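The compatibility rule above can be written out as a small check, treating SDK versions as (major, minor) tuples. This is a hypothetical helper for clarity, not part of the ZED SDK:

```python
# Hypothetical helper codifying the stated rule: the new streaming /
# recording mode appeared in SDK 4.1, so a 4.0 sender is incompatible
# with newer receivers, while 4.1+ senders work with any newer SDK.

def stream_compatible(sender, receiver):
    """sender/receiver are (major, minor) SDK version tuples."""
    new_mode = (4, 1)
    # A 4.0 sender uses the old streaming format, which receivers on
    # the new mode (4.1+, including 5.x) do not accept.
    if sender < new_mode and receiver >= new_mode:
        return False
    return receiver >= sender

print(stream_compatible((4, 0), (5, 0)))  # False: old format to new SDK
print(stream_compatible((4, 1), (5, 0)))  # True: new mode, compatible
```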
The Jetson Nano is struggling to deliver an accurate feed. This explains all the issues I have encountered: ROI not working with the stream, SVO crashing the runtime, and erratic body tracking.
I tested with a ZED2i connected via USB, and all sources (camera, stream, and SVO) worked perfectly with the ROI (two areas in the frame).
I will ask the hardware integrator either to find an option over the fiber optic link to use a USB 3 extender (they pulled 400 m of cable…), or to upgrade the Jetson to a more powerful model.
There is some ongoing work to improve the reliability and stability of the camera streaming and recording. This should be improved in one of the next versions of the SDK as well.
I just realised you were using a Jetson Nano? Is it an Orin Nano? Because that specific model does not have a hardware encoder, so you should not be able to stream the image with the ZED SDK.
Thanks for the clarification earlier.
Although the Jetson Nano (old generation) can stream using SDK 4.1.2, it's probably not an ideal choice for my project: I'm experiencing inconsistent body tracking, and the Region of Interest (ROI) feature isn't functioning.
Regarding the Orin series, I’d like to confirm:
Are you saying that none of the Orin modules, including the Orin NX, have a hardware encoder?
If that’s the case, does that mean that the ZED Box, which is built around the Orin, is not meant to handle encoding tasks?
Is its main purpose instead to run inference using either the full SDK or the Fusion API?