ZED camera processing

Hi,
I am trying to understand where the processing for person detection, point cloud generation, and other features happens. Does the camera have onboard processing? If not, is there a way to run only certain features? I only need person detection with the depth and position of that person; I don't need skeleton tracking or anything else that isn't necessary.
I want to minimize the load on my machine by not running the features I don't need.
System: Ubuntu 20
Camera: ZED 2i
SDK: 3.7
ROS2
GPU: GTX 1050
Thanks,

The ZED cameras do not process data onboard. Only frame synchronization, camera controls, and sensor data acquisition and synchronization are performed on the camera itself.
The ZED SDK performs all the remaining processing on the host machine, starting from depth estimation.
To minimize the load on your machine, simply do not start the modules that you do not need.
The ROS2 wrapper that you are using, for example, enables the SDK modules only if required by user parameters.
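As a minimal sketch against the SDK 3.x C++ API (enum and field names match that SDK generation and may differ in newer releases), this is roughly what "person detection with 3D position, no skeleton" looks like: a `MULTI_CLASS_BOX` detection model gives bounding boxes and positions without body keypoints, and the runtime class filter restricts detection to persons. In the ROS2 wrapper the equivalent toggles live in the wrapper's YAML parameters rather than in code.

```cpp
#include <sl/Camera.hpp>
#include <cstdio>

int main() {
    sl::Camera zed;

    sl::InitParameters init_params;
    init_params.camera_resolution = sl::RESOLUTION::HD720;
    init_params.depth_mode = sl::DEPTH_MODE::PERFORMANCE; // lightest depth mode
    init_params.coordinate_units = sl::UNIT::METER;

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS) return 1;

    // Object detection with tracking requires positional tracking in SDK 3.x
    zed.enablePositionalTracking();

    sl::ObjectDetectionParameters od_params;
    // MULTI_CLASS_BOX: bounding boxes + 3D positions, no skeleton keypoints
    od_params.detection_model = sl::DETECTION_MODEL::MULTI_CLASS_BOX;
    zed.enableObjectDetection(od_params);

    sl::ObjectDetectionRuntimeParameters od_rt;
    od_rt.object_class_filter = {sl::OBJECT_CLASS::PERSON}; // persons only

    sl::Objects objects;
    while (true) { // grab loop, as in the SDK samples
        if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
            zed.retrieveObjects(objects, od_rt);
            for (const auto &obj : objects.object_list) {
                // obj.position is the 3D position in the camera reference frame
                printf("Person %d at (%.2f, %.2f, %.2f)\n",
                       obj.id, obj.position.x, obj.position.y, obj.position.z);
            }
        }
    }
    zed.close();
    return 0;
}
```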

Hi @Myzhar,
I am seeing decreased performance of person detection after a while. When I launch the node the reaction time is quick, but it degrades as time passes. FYI, I am not running any other nodes or processes.

Are you using your own code or a sample of ours?

A sample of yours.
I am running two launch files:
1 - zed2i.launch.py
2 - display**.launch.py

Are you running both simultaneously?
Rviz can slow down the system if you execute it on the same machine where you run the ZED node.
How are you measuring the slowdown?
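For example, one objective check is to measure the real publish rate of an image topic from a terminal, independently of Rviz rendering (the topic name below assumes the wrapper's default `<camera_name>/zed_node/...` namespace; adjust it to yours):

```bash
# If this rate stays stable while Rviz looks choppy, the ZED node is fine
# and the bottleneck is Rviz rendering on the same machine.
ros2 topic hz /zed2i/zed_node/rgb/image_rect_color
```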

Yes, they are on the same machine.
The video feed in Rviz seems to slow down; you can see the frames get choppy after it runs for a while (choppy meaning the time between frames increases).

OK, that's an Rviz problem.
Rviz takes a lot of resources, so when you run it on the same machine as the ZED node and that machine is not very powerful, you start experiencing slowdowns.
I suggest you decrease the resolution and framerate, or get another machine and run Rviz remotely.
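A sketch of the remote setup (the package and launch file names follow the default zed-ros2-wrapper layout; the `ROS_DOMAIN_ID` value is an assumption and just has to match on both machines):

```bash
# On the machine with the camera: run only the ZED node, no visualization
ros2 launch zed_wrapper zed2i.launch.py

# On a second machine on the same network: match the domain ID, then run
# Rviz there so the rendering load stays off the camera machine
export ROS_DOMAIN_ID=0   # must match the value on the camera machine
rviz2
```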
