Body velocity values are too low when I use them in the real world

Hi,
I’m trying to build a system that tracks a detected person in real time. For this, I have a PLC-controlled platform that can move along the X axis and accurately follow the speed commands I send it. I take the X-axis velocity from Body.velocity[0] and send it to the PLC, but after a while the detected person leaves the camera’s field of view because the platform is too slow. I don’t think it should work like this, because the speed I send comes directly from the person the camera detects. The velocity values in the data are much too low compared to reality.
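
A simplified sketch of what I’m doing (based on the BodyTracking.py sample; send_speed_to_plc is a placeholder for my PLC communication, and attribute names may differ slightly between SDK versions):

```python
import pyzed.sl as sl

def send_speed_to_plc(vx):
    # Placeholder for the actual PLC communication
    pass

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_body_tracking(sl.BodyTrackingParameters())

bodies = sl.Bodies()
runtime_params = sl.RuntimeParameters()
body_runtime_params = sl.BodyTrackingRuntimeParameters()

while zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_bodies(bodies, body_runtime_params)
    if bodies.body_list:
        # X component of the first detected body's velocity, sent as the platform speed
        send_speed_to_plc(bodies.body_list[0].velocity[0])
```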

I am attaching a hand-drawn sketch below for a clearer understanding of the system.

How can I fix this problem?

Hi @emrekocdemir,

That’s an interesting setup!

  • Are you using WORLD or CAMERA reference frame?
  • If the camera is not moving, do you get accurate velocities?

Hi @JPlou,
Can you explain more about the first question? I did not really understand it.
For the second question: on Friday I will try detaching the camera from the platform, mounting it in a fixed position elsewhere, and feeding the data of a person walking in front of it to the PLC. If tracking works that way, I think I can conclude that the problem is caused by the camera moving.

@emrekocdemir

Looking forward to the results of the test :slight_smile:

Can you explain more about the first question?

Please read through this page of the documentation: Coordinate Frames

Basically, I’m wondering if your reference frame is moving along with the person it’s tracking, which would decrease the velocity output.

I’m currently using this as the basis for the project: BodyTracking.py
I don’t set a reference frame anywhere, so I’m probably using whatever the SDK sets by default, but I don’t know which one that is.

Then it’s CAMERA (see measure3D_reference_frame).
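
If you want to try the fixed frame instead, here’s a minimal sketch of switching the measures to WORLD (assuming positional tracking is enabled and that the detection measures follow measure3D_reference_frame in your SDK version):

```python
import pyzed.sl as sl

# Sketch only: switching the measures to the WORLD reference frame.
zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Positional tracking is needed for the WORLD frame to be meaningful
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

runtime_params = sl.RuntimeParameters()
# Default is REFERENCE_FRAME.CAMERA, i.e. values are relative to the (possibly moving) camera.
# WORLD expresses measures in the fixed world frame instead.
runtime_params.measure3D_reference_frame = sl.REFERENCE_FRAME.WORLD

while zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    pass  # retrieve bodies/objects here as usual
```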

Hi @JPlou,
Sorry for the delay. I placed the camera in a fixed position and walked in front of it, with the system behind me measuring my speed, sending it to the platform, and trying to follow me at that speed, but the system still fell far behind.

Then I tried the object tracking module. I put a bag on the system and placed the camera in a fixed position in front of it. I told the PLC to move the system at a speed of 1000 mm/second and measured the speed with objecttracking.py. The speed values in this module came in with about 1% error, which is close to perfect. But I still don’t understand why I’m having problems with body tracking. Are there any body tracking settings or solutions you would like me to try, or that you think would give more accurate results? I’m attaching the drawing of the experiment I did with the bag below.

Hi @emrekocdemir

That’s very interesting! So there would be a difference in velocity measurement between body tracking and object detection?
Can you try the exact same setup where you walked, but with the ObjectDetection module (set to detect people)?

Also, it would be interesting to plot the detected velocity of the bag and the detected velocity of you walking (with both OD and BT modules). Maybe there are some spikes in the plot lines that will give us clues.
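
For the logging, a rough sketch like this could dump the velocities to a CSV you can plot (shown with the object detection module and a person filter; the same idea applies to body tracking, and the exact parameter names may vary with the SDK version):

```python
import csv
import pyzed.sl as sl

# Sketch only: log the detected person's velocity every frame so it can be plotted later.
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

zed.enable_positional_tracking(sl.PositionalTrackingParameters())

od_params = sl.ObjectDetectionParameters()
od_params.enable_tracking = True
zed.enable_object_detection(od_params)

# Only keep persons, as discussed above
od_runtime = sl.ObjectDetectionRuntimeParameters()
od_runtime.object_class_filter = [sl.OBJECT_CLASS.PERSON]

objects = sl.Objects()
with open("velocity_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_ms", "vx", "vy", "vz"])
    while zed.grab() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_objects(objects, od_runtime)
        for obj in objects.object_list:
            # velocity is a 3-component vector in the current measure reference frame
            writer.writerow([objects.timestamp.get_milliseconds(),
                             obj.velocity[0], obj.velocity[1], obj.velocity[2]])
```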

Sure, I will do that and let you know. Thank you for the quick response.

I tried object tracking with the person detection filter. When the camera was stationary, tracking with the PLC and the platform worked really well. After that I attached the camera to the platform and moved the camera with it, but this gave really bad results. Then I started searching and found this: https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1CAMERA__MOTION__STATE.html

Is the problem that the camera thinks it is stationary, so the reported speeds are too low, and that is why the results are bad? If so, would it help to set camera_motion_state to MOVING as in that link? And how can I implement this? I couldn’t find any documentation on how to do it.

It turns out the camera motion state comes from the camera’s sensors and is not a parameter the user can change. When I query it (the correct attribute is camera_moving_state), it prints as CAMERA_MOTION_STATE.MOVING, so the camera knows it is moving. But the speeds are still very variable and inaccurate. In the video I shared below, the camera moves at a constant speed of 1.5 meters/second. However, the speed of the detected body is much lower.
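
For reference, checking it looks roughly like this (simplified sketch; camera_moving_state is the attribute I found for my SDK version):

```python
import pyzed.sl as sl

# Simplified sketch: read the motion state the camera reports from its sensors.
zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

sensors_data = sl.SensorsData()
while zed.grab() == sl.ERROR_CODE.SUCCESS:
    if zed.get_sensors_data(sensors_data, sl.TIME_REFERENCE.IMAGE) == sl.ERROR_CODE.SUCCESS:
        # STATIC, MOVING, or FALLING; this is read-only, the SDK sets it from the IMU.
        print(sensors_data.camera_moving_state)
```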

Hi @JPlou,
Is there a problem with support, or is there simply no solution to my problem? Could I please get an update?

Hi @emrekocdemir,

Sorry for the delay, for some reason I don’t have notifications for this thread and missed your answers.

In the video I shared below, the camera moves at a constant speed of 1.5 meters/second. However, the speed of the detected body is much lower.

The skeleton detection is also very inaccurate in the video, so the centroid from which the velocity is calculated probably varies a lot. I recommend using the object detection’s bounding box with the person filter, as you did.

Also, I talked with the team and there may be an issue with how the velocity is calculated that would prevent us from getting accurate speed data when the camera is moving.

If I’m not mistaken, your goal is to move the camera’s platform at the speed of a person it’s tracking, is that correct? If the velocity is not accurate or stable enough, maybe another approach is possible. You could detect a person and try to keep them as close to the center of the image as possible by adjusting the platform’s speed, for example (see the sketch below).
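
Something along these lines, as a very rough sketch (send_speed_to_plc, the gain, and the resolution attribute path are placeholders/assumptions to adapt to your setup):

```python
import pyzed.sl as sl

# Rough sketch of a proportional "keep the person centered" controller.
KP = 2.0            # mm/s of platform speed per pixel of horizontal offset (to be tuned)
MAX_SPEED = 1500.0  # platform speed limit in mm/s

def send_speed_to_plc(speed_mm_s):
    pass  # placeholder for the PLC communication

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

zed.enable_positional_tracking(sl.PositionalTrackingParameters())
od_params = sl.ObjectDetectionParameters()
od_params.enable_tracking = True
zed.enable_object_detection(od_params)

od_runtime = sl.ObjectDetectionRuntimeParameters()
od_runtime.object_class_filter = [sl.OBJECT_CLASS.PERSON]

# Attribute path may differ between SDK versions (camera_resolution in older ones)
image_width = zed.get_camera_information().camera_configuration.resolution.width
objects = sl.Objects()

while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_objects(objects, od_runtime)
    if objects.object_list:
        person = objects.object_list[0]
        # Horizontal center of the 2D bounding box (4 image-space corners)
        center_x = sum(pt[0] for pt in person.bounding_box_2d) / 4.0
        error_px = center_x - image_width / 2.0
        speed = max(-MAX_SPEED, min(MAX_SPEED, KP * error_px))
        send_speed_to_plc(speed)
    else:
        send_speed_to_plc(0.0)  # stop when no one is detected
```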

I will log the velocity issue for investigation though, thanks again for the report.

If I’m not mistaken, your goal is to move the camera’s platform at the speed of a person it’s tracking, is that correct?

That is true, but since the main purpose of the project is a detailed analysis of the athlete, if we cannot measure the speed accurately, can we trust the accuracy of the other data? As you mentioned above, I also tried object tracking with the person filter, but the results are still not good. As you said, keeping the person centered is a logical approach, but the main goal of the project is the accuracy of the analysis.

Best regards.