Very low recording frame rate

Hi,
I am working with a Syslogic RPC RS A4NA (which has a Jetson Orin Nano and an Apacer NVMe SSD with a sequential write speed of 1115 MB/s) running L4T R35.4.1, a single ZED X camera connected via a 50 cm GMSL2 cable, and ZED SDK 5.0.0. The power mode is set to the maximum (mode 0, 10 W) and I am running jetson_clocks.

I can properly visualize the camera's video and depth with the ZED tools, at up to 60 fps for the video (at any resolution) and 8–10 Hz for the depth data, as displayed in ZED_Explorer/ZED_Depth_Viewer.

However, when I try to record the data, the frame rate drops drastically. I have tried both ROS 2 and the native ZED tools and samples. My end goal is to record ROS 2 bags containing, at a minimum, RGB images (video), the depth map, the point cloud, and IMU data.

Apparently, the Jetson Orin Nano has no hardware video encoder, and software H.264 encoding cannot keep up in real time, so I cannot record SVO files with H.264/H.265 compression.
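To put numbers on that, here is a quick back-of-the-envelope estimate of the write load an uncompressed or LOSSLESS recording would generate. The per-eye resolution, 4-byte BGRA pixels, and the ~2:1 lossless ratio are my assumptions, not figures from the SDK docs:

```python
# Back-of-the-envelope write load for a stereo recording. Assumptions
# (mine, not from the SDK docs): 1920x1200 per eye, 4 bytes/pixel (BGRA),
# side-by-side stereo, and roughly 2:1 for LOSSLESS compression.
width, height = 1920, 1200
bytes_per_pixel = 4
eyes = 2  # stereo pair

def raw_mb_s(fps):
    """Raw (uncompressed) data rate in MB/s at a given frame rate."""
    return width * height * bytes_per_pixel * eyes * fps / 1e6

for fps in (30, 60):
    raw = raw_mb_s(fps)
    print(f"{fps} fps: raw ~{raw:.0f} MB/s, lossless (~2:1) ~{raw / 2:.0f} MB/s")
```

Under these assumptions, at 60 fps the raw stream alone is around 1100 MB/s, essentially the SSD's rated 1115 MB/s sequential write speed, leaving no headroom.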
With ZED_Explorer, or with the recording/recording/mono/cpp sample (after modifying the code to set the compression mode to LOSSLESS), every SVO file I record is then reported as corrupted or invalid by all the ZED tools, and I have to repair it with the ZED_SVO_Editor tool. (Any idea why?) When I then read it back, ZED_Explorer, for example, gives me a frame rate of ~14 fps at most.

Using a ROS 2 Docker container with the proper Cyclone DDS tuning (following your guide), I get an even lower frame rate: ros2 topic hz and the rqt topic monitor report ~4.5 Hz with the default parameters. With no subscriber, ros2 topic hz reports 30 Hz, but as soon as any subscriber connects (ros2 topic echo, ros2 bag record, rqt_bag, the rqt topic monitor, Foxglove), the rate immediately drops to ~4.5 Hz. (The rqt topic monitor always shows this value, even when there is no other subscriber.) Reducing the resolution to 1080p gives 5.5 Hz, and setting the depth mode to NONE gives 5.7 Hz, no higher.
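For context, the tuning I applied is along these lines (a sketch based on the usual ROS 2 DDS-tuning guidance; the element names come from the Cyclone DDS configuration schema, the sizes here are placeholders, and the kernel's net.core.rmem_max must also be raised via sysctl or the requested buffer is silently capped):

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <!-- allow large datagrams for image / point-cloud messages -->
      <MaxMessageSize>65500B</MaxMessageSize>
    </General>
    <Internal>
      <!-- request a large UDP receive buffer; capped by net.core.rmem_max -->
      <SocketReceiveBufferSize min="10MB"/>
    </Internal>
  </Domain>
</CycloneDDS>
```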
I have tried monitoring the topic rates both from a remote PC and on the Syslogic host itself, with no difference.

Why can't I get better performance out of the ZED X camera?

Attached are the results from the ZED_Diagnostic tool:
dmesg.log (46.9 KB)
ZED_Diagnostic_Results.json (62.4 KB)

I recommend recording SVO2 files with ZED Explorer instead of recording rosbags directly.
Then, on a more powerful PC, use the SVO2 file as input for the ZED ROS 2 wrapper and record the rosbag from the replayed stream.

This is correct. The Jetson Orin Nano is the only Jetson module not equipped with hardware encoders. So it’s not possible to use H.264 and H.265 compression with it.

The ZED SDK v5.0.2 GA, released yesterday, is expected to resolve this issue.

Docker always introduces a small performance loss.

This is caused by memory copies of large data buffers. Try reducing the resolution and frame rate.
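For example (a sketch only: the parameter names follow the zed-ros2-wrapper common configuration and may differ between wrapper versions, and the values are purely illustrative):

```yaml
/**:
  ros__parameters:
    general:
      grab_resolution: 'HD1080'  # lower sensor resolution
      grab_frame_rate: 30        # sensor frame rate
      pub_resolution: 'CUSTOM'   # enable downscaled publishing
      pub_downscale_factor: 2.0  # half width/height -> 1/4 of the copied data
      pub_frame_rate: 15.0       # publish at a fraction of the grab rate
```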

It's not a depth-processing problem, because depth is computed on the GPU; the memory copies happen on the CPU.

Remember that the Jetson Orin Nano is not as powerful as an Orin NX or AGX Orin.
You can get an idea by comparing the depth extraction performance here.

Hi there. I am having a similar issue, and to my knowledge SVO2 recording is not an immediate option.

I wish to record the raw image data as well as the depth map from two ZED X cameras. Compute is sufficient and the disk write speed is ~2,500 MB/s. Recording the raw images alone is fine, but introducing the depth map completely bottlenecks the recording process.

The primary reason for not using SVO2 recording is that I also need to record a variety of other external (very low-bandwidth) topics, and I need all the data to be synchronized on playback. That is certainly possible in post-processing, but real-time recording is much preferred for our use case.

If anyone has any suggestions please let me know.

Hi @nathanredmond123
Welcome to the StereoLabs community.

SVO2 does not store depth information, only the raw stereo images and sensor data.
If you enable depth processing, you add unnecessary load to the GPU and CPU.

SVO2 allows you to store external information as metadata. Please find examples on GitHub.

Hi Myzhar,

Thanks for the quick response. I should have been clearer in my previous message. I am currently attempting to bag the following ROS 2 (Humble) topics, obtained via the ROS 2 wrapper:

Note: these topics are duplicated, since we run the ROS 2 wrapper with dual ZED X cameras; the namespaces have been omitted for brevity.

  • /zed/zed_node/rgb/image_rect_color
  • /zed/zed_node/rgb/camera_info
  • /zed/zed_node/depth/depth_registered
  • /zed/zed_node/depth/depth_info
  • /tf (externally obtained, published at 50 Hz, only one publisher)
  • /tf_static (externally obtained, only one publisher)

When recording all of these topics to a ROS 2 bag with either the mcap or sqlite3 storage plugin, I notice that nearly every message is dropped: I see between 0 and 2 messages recorded per topic.
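For what it's worth, the recorder was running with default QoS. rosbag2 lets you override the recorder's subscription QoS with ros2 bag record --qos-profile-overrides-path; a best-effort profile like the following (a sketch, with topic names from this thread and queue depths that are untested guesses on my part) avoids reliable-retransmission pressure on the large image topics:

```yaml
# overrides.yaml -- pass with:
#   ros2 bag record --qos-profile-overrides-path overrides.yaml <topics...>
/zed/zed_node/rgb/image_rect_color:
  reliability: best_effort
  history: keep_last
  depth: 5
/zed/zed_node/depth/depth_registered:
  reliability: best_effort
  history: keep_last
  depth: 5
```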

It should be noted that I have tested this on a Jetson AGX Orin, and have also tried to reproduce the issue by replaying an SVO2 through the ROS 2 wrapper on a Lambda workstation with three A6000s, 512 GB of memory, and a Threadripper, following the same ROS 2 bagging process.

The total combined bandwidth of these topics over 2 ZED X cameras is just under 500 MB/s, which both systems should be able to handle with ease (CPU is not overloaded).
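That figure comes from an estimate along these lines (a sketch: the resolution, rate, and pixel encodings below are placeholder assumptions that happen to land near our measured total, not our exact configuration):

```python
# Per-topic bandwidth estimate for one camera. Assumed: HD1080 at 15 Hz,
# bgra8 color (4 bytes/px) and 32FC1 depth (4 bytes/px) -- placeholders,
# not our exact configuration.
def topic_mb_s(width, height, bytes_per_pixel, hz):
    """Approximate topic bandwidth in MB/s, ignoring message headers."""
    return width * height * bytes_per_pixel * hz / 1e6

w, h, hz = 1920, 1080, 15
rgb = topic_mb_s(w, h, 4, hz)    # bgra8 color image
depth = topic_mb_s(w, h, 4, hz)  # 32FC1 depth image
per_camera = rgb + depth
print(f"one camera: {per_camera:.0f} MB/s, two cameras: {2 * per_camera:.0f} MB/s")
```

The camera_info, depth_info, and tf topics are negligible next to the image payloads, so the image topics dominate the total.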

This is not an issue with depth processing. In PERFORMANCE depth mode we see exactly the same behavior, and the depth_registered topic is published at the expected rate of up to 30 Hz.

When recording ONLY the /zed/zed_node/rgb/* topics mentioned above, I do not see this issue. My guess is that the DDS layer (Cyclone in our case, with SocketRcvBufferSize raised to 500 MB) simply cannot handle the throughput.

As mentioned, the reason for bagging the data in this way is that we need to include the /tf and /tf_static topics, and need to ensure that they are closely synchronized with the Zed data.

You mentioned documentation for including external information as metadata, but I do not see anything in the repo you supplied that would allow me to include the tf data so that it can be played back alongside the ZED data. Is there something you could share here?

Ideally, I would like to be able to pass the SVO2 containing the tf data to the ROS 2 wrapper again, such that all appropriate Zed topics are published with the /tf and /tf_static topics (as if we are playing a bag file). Of course, if resolving the ros2 bag record bottleneck is possible, that would be preferred.

UPDATE: The dropped messages when recording the aforementioned topics via ros2 bag were actually caused by recording the /zed/zed_node/depth/depth_info topic. Since camera_info is necessary to visualize the image_rect_color topics in RViz, I assumed the same would be true for depth_registered when visualized as an image.

However, it seems that no data is actually published on this topic when launching the ROS 2 wrapper with an SVO2 file. When trying to bag this topic, the ros2 bag recorder for some reason drops every single message on every subscribed topic. As soon as depth_info is omitted, I see the expected behavior. This seems like a bug in ros2 bag more than anything. I'll post here again if I find anything else.

Thanks!