Multi-camera body tracking sample not working with ZED SDK 4.1

I have received a pre-release of ZED SDK 4.1 and attempted to run the multi-camera sample, but it does not work. Each ZED Box (one per camera) runs the following sender script, which publishes data over the local network:

#include <iostream>

#include <sl/Camera.hpp>
#include <sl/Fusion.hpp>

int main() {
    // Create a Camera object
    sl::Camera zed;

    sl::InitParameters init_parameters;
    init_parameters.depth_mode = sl::DEPTH_MODE::ULTRA;
    init_parameters.camera_fps = 30;
    init_parameters.camera_resolution = sl::RESOLUTION::HD1080;
    init_parameters.coordinate_units = sl::UNIT::METER;
    init_parameters.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP;
    auto state = zed.open(init_parameters);
    if (state != sl::ERROR_CODE::SUCCESS)
    {
        std::cout << "Error with init parameters: " << state << std::endl;
        return 1;
    }

    // Set camera settings
    zed.setCameraSettings(sl::VIDEO_SETTINGS::GAIN, 1);
    zed.setCameraSettings(sl::VIDEO_SETTINGS::EXPOSURE, 100); 

    // in most cases in a body tracking setup, the cameras are static
    sl::PositionalTrackingParameters positional_tracking_parameters;
    positional_tracking_parameters.set_as_static = true;
    state = zed.enablePositionalTracking(positional_tracking_parameters);
    if (state != sl::ERROR_CODE::SUCCESS)
    {
        std::cout << "Error with positional tracking: " << state << std::endl;
        return 1;
    }

    // define the body tracking parameters; since the Fusion module can do the tracking and fitting, you don't need to enable them here unless your app needs them
    sl::BodyTrackingParameters body_tracking_parameters;
    body_tracking_parameters.detection_model = sl::BODY_TRACKING_MODEL::HUMAN_BODY_ACCURATE;
    body_tracking_parameters.body_format = sl::BODY_FORMAT::BODY_38;
    body_tracking_parameters.enable_body_fitting = true;
    body_tracking_parameters.enable_tracking = true;
    body_tracking_parameters.allow_reduced_precision_inference = true;
    state = zed.enableBodyTracking(body_tracking_parameters);
    if (state != sl::ERROR_CODE::SUCCESS)
    {
        std::cout << "Error with body tracking parameters: " << state << std::endl;
        return 1;
    }

    int port = 30020;
    sl::CommunicationParameters communication_parameters;
    communication_parameters.setForLocalNetwork(port);

    zed.startPublishing(communication_parameters);

    sl::Bodies bodies;
    sl::BodyTrackingRuntimeParameters body_runtime_parameters;
    body_runtime_parameters.detection_confidence_threshold = 40;
    // body_runtime_parameters.skeleton_smoothing = 0.7;

    std::cout << "Starting to send data :)" << std::endl;
    // as long as you call grab() and retrieveBodies() (which runs the detection), the camera will seamlessly transmit the data to the Fusion module
    while (true) {
        if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
            // just be sure to run the bodies detection
            zed.retrieveBodies(bodies, body_runtime_parameters);

            // Print the number of detected bodies (length of bodies.body_list)
            std::cout << "Body format: " << bodies.body_format << std::endl;
            std::cout << "Number of bodies detected: " << bodies.body_list.size() << " bodies" << std::endl;
        }
    }

    // Close the camera
    zed.close();
    return 0;
}

As you can see, I am logging the bodies detected, and the senders are detecting bodies.

The receiver is the one in the multi-camera sample. The senders are registered, and I do get the metadata from each camera; however, no bodies come through from any of the cameras on the receiver side (I have logged this).

Both the computer running the receiver and all of the zed boxes have ZED SDK 4.1 installed.
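For completeness, the receiver boils down to something like the following sketch, per my reading of the 4.x Fusion API. The serial number, IP, and port are placeholders; in the real sample, the subscriptions come from the ZED 360 calibration file rather than being hard-coded:

```cpp
#include <iostream>
#include <sl/Fusion.hpp>

int main() {
    sl::Fusion fusion;

    // Units and coordinate system must match the senders
    sl::InitFusionParameters init_params;
    init_params.coordinate_units = sl::UNIT::METER;
    init_params.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP;
    fusion.init(init_params);

    // Subscribe to one sender over the local network (serial/IP/port are placeholders)
    sl::CameraIdentifier uuid(12345);
    sl::CommunicationParameters comm_params;
    comm_params.setForLocalNetwork("192.168.1.10", 30020);
    auto state = fusion.subscribe(uuid, comm_params, sl::Transform());
    if (state != sl::FUSION_ERROR_CODE::SUCCESS) {
        std::cout << "Subscribe error: " << state << std::endl;
        return 1;
    }

    // Enable fused body tracking; tracking and fitting are done here, not in the senders
    sl::BodyTrackingFusionParameters bt_params;
    bt_params.enable_tracking = true;
    bt_params.enable_body_fitting = true;
    fusion.enableBodyTracking(bt_params);

    sl::Bodies fused_bodies;
    sl::BodyTrackingFusionRuntimeParameters rt_params;
    while (true) {
        if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS) {
            fusion.retrieveBodies(fused_bodies, rt_params);
            std::cout << "Fused bodies: " << fused_bodies.body_list.size() << std::endl;
        }
    }

    fusion.close();
    return 0;
}
```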

Bump - does anyone know the API changes introduced in SDK 4.1?

Hello @haakonflaar,

If I understand correctly, you have a pre-release version of the 4.1. I’m sorry for the inconvenience, but the API changes will be announced when the 4.1 is released publicly.

That being said, I don’t see why your code would not work; it seems straightforward enough, and there are no breaking changes in the Body Tracking fusion that I can recall. I can only think of a mismatch in one of the following:

  • body models (38-34-18)
  • SDK version (but it seems it’s not, from what you’ve said)
  • fps in the senders, which would prevent synchronization

Also, you should not have to enable tracking and fitting in the senders for the Fusion to work; we even recommend disabling them to gain FPS.
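Concretely, that would mean flipping these two flags in the sender script (a sketch; everything else stays as in your script):

```cpp
// Sender-side sketch: leave tracking and fitting to the Fusion module
sl::BodyTrackingParameters body_tracking_parameters;
body_tracking_parameters.detection_model = sl::BODY_TRACKING_MODEL::HUMAN_BODY_ACCURATE;
body_tracking_parameters.body_format = sl::BODY_FORMAT::BODY_38;
body_tracking_parameters.enable_tracking = false;     // Fusion handles cross-camera tracking
body_tracking_parameters.enable_body_fitting = false; // Fusion handles the fitting
```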

Hello @JPlou :slight_smile:

You are correct - I am using a pre-release of SDK 4.1 that is installed on all zed boxes which the cameras are connected to (one box per camera), as well as the computer I am running the receiver on.

I also find it strange that the code is not working. In the image below, I’m showing the sender logs for 3 of the 6 cameras (upper 3 command lines) and the fused logs showing the receiver’s detections for each individual camera as well as the fused result. The fused receiver receives no body data from the senders, although two of them are clearly detecting bodies. The cameras are connected to the receiver, though, as you can see from the camera FPS shown in the 3D viewer.

To address your bullet points:

  • All senders are using body model 34. I have tried using BODY_38, and the senders are still detecting bodies; however, the fusion receiver is still not receiving any. I have not specified a body model in the receiver - should I, and if so, how?
  • I am using the Windows and Jetson (ZED_SDK_Tegra_L4T35.1_v4.1.0) SDK 4.1. Have you made updates to the SDK pre-release installers lately? If so, please send me the newest versions.
  • The FPS of the senders (as you can see from the image) is about 13-15, which should be good enough to run fusion…?
  • If I don’t enable tracking and fitting in the sender, the fusion does not work - I am not sure why - maybe this is where the error is?

EDIT: If you have the chance, I would greatly appreciate it if you could quickly test the sender script on your side with the same SDK 4.1 setup.

EDIT 2: By the way, the exact same scripts (sender and receiver) work when the current ZED SDK 4.0.8 is installed on the ZED Boxes and the Windows computer - so it is clearly an issue with the updated SDK 4.1.

EDIT 3: Running ZED 360 on SDK 4.1 and setting up the senders via local network IP successfully connects to the cameras, but no fused data is registered by ZED 360 either.

Hi @haakonflaar,
Sorry for the delay in the answer, you’re right, there is an issue with that. :bowing_man:
We’re in the process of investigating this, it will be fixed for the official release.

1 Like

We seem to be having the same issue. We have 6 ZED 2i cameras, each with a ZED Box. We just upgraded to 4.1 on all of the ZED Boxes and the control server. When we run the ZED 360 calibration tool, the cameras no longer detect bodies. When we view the raw camera data, all six skeletons show up, but the fusion does not work, so it cannot calibrate. The detection ratio stays at zero.

@Analogueartists

I’ve not reproduced this behavior yet, but it may just be hidden behind another issue, where the streaming can fail when the machine is under heavy load.

Thank you for the report, we’re actively taking a look at these and will release a fix as soon as possible.

@JPlou

Thank you for getting back to us. I have tried it with ZED360 and also with a custom fusion console app.

I had created a calibration file before we upgraded to 4.1, while we were using 4.0.

I wanted to create a new calibration with the new ZED 360 and ZED SDK 4.1. It says "Fusion: Success", but it never moves the cameras to their new positions after I click "Start Calibration", and the detection ratio of each camera remains at 0%, even though we can see skeletons from the different cameras.

I also ran a custom console app using the previous calibration and get the same thing in my viewport.

No fused bodies and also 0% Ratio Detection, but I still see individual skeletons.

@Analogueartists

Thank you for the feedback, I shared it with the team, we’re working on it. :hammer_and_wrench:

I also wanted to check with you regarding the calibration process as it relates to the fusion. We have set these 6 cameras up with the max depth range, but they seem to fall back to the default depth as soon as we launch the fusion. When testing with a single camera, we get great tracking out to about 40 meters. As soon as we launch the fusion, the detection range drops back to 20 m on each camera, and our skeletal tracking is no longer consistent. Is there a way to make sure the cameras really are set to the max range?
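For reference, this is roughly how we pin the range on each sender (a sketch; the 40 m value matches what we tested on a single camera, and whether the Fusion overrides this setting is exactly our question):

```cpp
// Sender-side sketch: request the extended depth range explicitly at open time
sl::InitParameters init_parameters;
init_parameters.coordinate_units = sl::UNIT::METER;
init_parameters.depth_mode = sl::DEPTH_MODE::ULTRA;
init_parameters.depth_maximum_distance = 40.0f; // max depth range, in the chosen units
```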