Hello, I’m working on a project to detect road obstacles and surface defects in a mining quarry. I chose a ZED 2i paired with a Jetson Nano, but I’m having trouble configuring and running the necessary services (YOLO + SLAM + IMU) so that they work together and send their output to a cloud server.
I’m looking for help or consultation, both in choosing the right libraries and in configuring the ZED 2i + Jetson Nano setup, so that the following processing pipeline runs with sufficient performance:
The camera captures video and streams it to the Jetson Nano via the ZED SDK
The SDK runs YOLO + SLAM + IMU fusion in the background to detect road defects
Road section parameters (coordinates, timestamp, vehicle speed) + an image of the detected defect are transmitted to a cloud server
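To make the last step of the pipeline concrete, here is a minimal sketch of assembling the cloud report (coordinates, timestamp, vehicle speed, plus the defect image). All field names and the `build_defect_report` helper are illustrative, not a fixed schema; adapt them to whatever your cloud endpoint actually expects:

```python
import base64
import json
import time

def build_defect_report(lat, lon, speed_mps, jpeg_bytes, defect_label):
    """Assemble one report for a detected road defect.

    Field names here are illustrative -- adapt them to the schema
    your cloud endpoint expects.
    """
    return {
        "type": defect_label,
        "location": {"lat": lat, "lon": lon},
        "timestamp": time.time(),          # Unix epoch seconds
        "speed_mps": speed_mps,            # vehicle speed from odometry/IMU
        # JPEG crop of the defect, base64-encoded so it fits in JSON
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
    }

# Example with a dummy image buffer standing in for a real JPEG crop:
report = build_defect_report(55.751, 37.618, 4.2, b"\xff\xd8fake-jpeg", "pothole")
payload = json.dumps(report)
```

The actual upload (HTTPS POST, MQTT, etc.) is a separate choice; keeping the payload small (a cropped defect image rather than the full frame) matters on a quarry site with limited connectivity.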
Hi,
The hardware needs to process data from the camera + IMU in the background, in parallel with the SLAM process. Other users of the system say that the camera and Jetson Nano need to be switched into a special operating mode to reach that level of performance, but I don’t understand how exactly to do this in practice.
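On the software side, "in the background, in parallel with SLAM" usually means running detection in a worker thread fed by a bounded queue, so the capture/SLAM loop never blocks on inference. Below is a minimal, ZED-agnostic sketch of that pattern; `detect_defects` is a placeholder for the real YOLO inference call (e.g. via TensorRT on the Nano), and the frame IDs stand in for real frames:

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=4)  # bounded: detection can't lag unboundedly
results = []

def detect_defects(frame):
    # Placeholder for the real YOLO inference call
    return f"result-for-{frame}"

def detection_worker():
    # Runs detection in a background thread while the main loop keeps
    # grabbing frames and feeding SLAM without waiting on inference.
    while True:
        frame = frame_queue.get()
        if frame is None:              # sentinel: shut down cleanly
            break
        results.append(detect_defects(frame))
        frame_queue.task_done()

worker = threading.Thread(target=detection_worker, daemon=True)
worker.start()

# Main capture loop (stand-in for the ZED grab/SLAM loop):
for frame_id in range(3):
    try:
        frame_queue.put_nowait(frame_id)  # drop frames if the detector is busy
    except queue.Full:
        pass                              # SLAM keeps running regardless

frame_queue.put(None)  # stop the worker
worker.join()
```

Dropping frames under load (the `queue.Full` branch) is usually acceptable for defect detection, since consecutive frames overlap heavily; SLAM, by contrast, should keep consuming every frame.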
Hi @aslevandovsky
The Jetson Nano is an old entry-level board with limited compute capability.
You can maximize performance by setting the power mode to 0: sudo nvpmodel -m 0
and running the jetson_clocks script: sudo jetson_clocks (on older JetPack releases it may be installed as jetson_clocks.sh)
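For reference, a typical sequence on the Nano looks like this (assuming JetPack 4.x; on the Nano, mode 0 is the maximum-performance MAXN profile):

```shell
# Select the maximum-performance power mode (mode 0 on the Nano)
sudo nvpmodel -m 0
# Query the currently active power mode to confirm the switch
sudo nvpmodel -q
# Lock CPU/GPU/EMC clocks at their maximum for the current power mode
sudo jetson_clocks
```

Note that jetson_clocks does not persist across reboots, so it needs to be re-run (or scripted into startup) after each boot.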
Would it be possible to contact you with more questions on this topic? I realize that my team’s level of knowledge is currently quite low, so our questions may be too general; we need someone who can point us in the right direction.