Retrieving bodyFusion data per joint problem

Hello all,
I have been trying ZED360 and the fused model using 2 cameras, and overall it seems good. However, I am not yet able to get the joint data, which I could do with 1 camera previously. My problem is probably related to the documentation and how I am using the functions.
What I used to do was to have a variable like the following:
auto object = bodies.body_list[0];
then I would get each data point from it one by one.
However, when I am trying to do a similar thing with the fused data, VS stops and exits.
What I have tried is the following:
// fusion outputs
sl::Bodies fused_bodies;
auto object = fused_bodies.body_list[0]; //problematic line
std::cout << object.keypoint[2] << std::endl;

For some reason, the code doesn’t execute (fused_bodies.body_list[0]) properly, which prevents me from accessing the data points. The output of the code is the following:

0 0 0
0 0 0

D:\ … \ZED_BodyFusion.exe (process 29236) exited with code -1073741819.
Press any key to close this window . . .

I made sure that there is a human in the frame for detection, and I don’t know why I am not able to get the data.
All your support is much appreciated,

Hi @feraskiki,

Did you check the size of body_list? Is there anything inside it?

The size of it is actually zero (it is empty). I don’t know why that is. I am able to see the skeleton model, so it should be working. Right? Am I missing something?

Are you modifying the Fusion sample, or coding from scratch?
You should get the data you want after checking that there is new data:

hello again,
Thank you for your reply. The file is now recording position data, but I am still having a problem retrieving body joint data, in both fusion mode and normal mode. I am getting only 0 or 1 instead of meaningful values: every quaternion comes out as x=0, y=0, z=0, w=1.
I wrote the output to a file and looped over all joints (0 to 34), and got the same problem for all of them. Since the model's visual output on screen is good, this should just be a data-retrieval problem. The method I am using is very similar to what I did with the 3.8 SDK and a single camera, but I cannot get joint orientations properly with the 4.0 SDK.

All your help is much appreciated.

NOTE: I am not coding it from scratch. The code I am using is a modification of the examples given for body tracking.

This is an example of my code

        if (fused_bodies.is_new) {
            if (fused_bodies.body_list.size() > 0) {
                auto object = fused_bodies.body_list[0];
                std::cout << "pnt 15: " << object.keypoint[15] << std::endl;
                std::cout << "pnt 15: " << object.local_orientation_per_joint[15] << std::endl;
            } else {
                std::cout << "Empty" << std::endl; // body_list is empty
            }
        }

I think there is a bug in the SDK. The orientation data should be present in BODY_34, and I couldn’t get it either. We’ll take a look and fix it for the next patch.

For BODY_34, the keypoints are generated from the BODY_18 skeleton. Some, like the spine, are completely inferred and have a fixed orientation. Some of your keypoints should have an orientation, but if a keypoint is inferred, chances are it will not.

You can compare the skeletons in the documentation to see where you should expect an available rotation, that is, on the “real” keypoints from BODY_18.

Sorry for the confusion, I tested wrongly :upside_down_face:

Thank you for your reply. I am having a similar problem even in the normal BODY_38 (without camera fusion). What could be the reason?
This is the code I am using

            if (file_open) {
                myfile << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << ",";

                for (int i = 0; i < 10; i++) {
                    int indx = arr_orient[i];
                    // (qx, qy, qz, qw)
                    myfile << indx << "," << object.keypoint[indx].x << "," << object.keypoint[indx].y << "," << object.keypoint[indx].z << ","
                        << object.local_orientation_per_joint[indx].w << "," << object.local_orientation_per_joint[indx].x << ","
                        << object.local_orientation_per_joint[indx].y << "," << object.local_orientation_per_joint[indx].z << ",";
                }
                myfile << endl;
            } else {
                cout << "Unable to open file";
            }

What is your similar problem exactly? Can you share an output?

Sorry, I edited the message yesterday, the 0 orientations are normal on some points.

On body 38, 3 kinds of keypoints may not have an orientation:

  • the keypoints that are extremities
  • the head points since we assume those are not moving relative to each other
  • the points not visible to the camera, for which the fitting may give an orientation if they disappear after having been seen, but which start with a 0 orientation (for example, in my screenshot I was sitting at my desk so only my upper body was visible: the knee keypoints are 0-oriented).

I hope this helps.

Thank you for your reply
Actually, I tried getting all available joints, but kept getting this problem. I understand that not all keypoints have orientation values, but the ones I used to get values from (like elbow and shoulder) don’t give me values anymore (besides 0)

Have you enabled the enable_body_fitting in BodyTrackingParameters?

Thanks a lot for your help. This worked. Body fitting was enabled:
body_tracker_params.enable_body_fitting = true;
but tracking was not. After I set it to true as well, I no longer get the previous problem:
body_tracker_params.enable_tracking = true;

I was able to retrieve both position and orientation data successfully. However, I am not sure about the position values themselves. I tried simple motions (like raising an arm) and compared them with the results I got from a single camera, and they seem different.
I did the same motion with each camera alone and compared those results with the fusion results, and the fusion output doesn't match either single-camera output. I am doing a similar thing in both the single and fusion codes, so it is probably related to the coordinate system. (There is also a difference in units, as the fusion output is in meters, but I accounted for that.)

I am not exactly sure about the coordinate system that is used. I think I read before that one of the cameras is used as a coordinate center and the others are used for increased accuracy. Can I choose which one to be the main one? (or is my understanding wrong?)

Also, can I choose which body_format to use?

it seems that after enabling both

  • enable_tracking
  • enable_body_fitting
    that the body_format is 38, can this be changed to the other ones?

a note from the code regarding body fitting (enable_body_fitting) was this:
“Defines if the body fitting will be applied. If you enable it and the camera provides data as BODY_18 the fused body format will be BODY_34.”

Finally, it seems I am getting double the data rate in the fusion compared to a single camera (54 vs ~23), which is great but I am not sure if this is to be expected or if I am missing something.


Can I choose which one to be the main one?

The main one will be the first in order in the ZED360-generated configuration file. However, the others are not used for “increased accuracy”. They are just located in the 3D reference frame of the first one. The 3D positions output by Fusion will be given in this reference frame as well.

Also, can I choose which body_format to use?

Yes, set it in the senders. The Fusion will use the one of its senders.

it seems that after enabling both

  • enable_tracking
  • enable_body_fitting
    that the body_format is 38, can this be changed to the other ones?

Sure, enable BODY_18 or BODY_34 in the senders; it should do the trick.

I am getting double the data rate in the fusion compared to a single camera

What are you measuring exactly? Can you provide a code sample or explain how we can reproduce it?

Thanks for the detailed post :smiley:

Thanks for your reply. I will be checking it and share the updates.

What I meant by “double the data rate” is that I am getting more data points per second, which is good.

The code is based on the data fusion example, and I am getting the data inside the loop:

I start the chrono clock before the loop with this line:
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();

in the loop, I get the time per loop cycle like this:
myfile << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count();

Is the increase in data frequency expected here?