Velocity and Distance inference from keypoints?

Hi,
After following the introduction videos and going through the Jupyter notebooks, I have managed to get the poses of interest. I also have a set of keypoints extracted in the CSV files. I was wondering how I can measure or estimate:

  • Velocity. Consider, for example, that I track only the x coordinate of the nose. The frame rate is 30 fps. In this scenario, I have a side view of the animal moving horizontally. (I guess one can only calculate the average velocity, not frame-by-frame velocity?)

  • In the same scenario, I would like to measure the distance traversed as well.

  • Calculate the angles between keypoints. For that, I guess one needs to normalize the keypoints before treating them as vectors and finding the angles.

I am not sure if the above is possible from the keypoints alone, or whether one needs additional information.

Thanks in advance.

You can calculate the instantaneous velocity of the nose using the simple formula: (x2 - x1) / dt, where x1 and x2 are the coordinates in frame 1 and frame 2, and dt is the time elapsed between these two frames; e.g., if you pick two neighboring frames, dt = 1/30.
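In code, that frame-by-frame calculation could look like this (a small NumPy sketch with made-up nose coordinates):

```python
import numpy as np

# Hypothetical example: x coordinates of the nose (in pixels), one per frame.
x = np.array([100.0, 103.0, 107.0, 112.0, 118.0])

fps = 30          # frame rate of the video
dt = 1.0 / fps    # time between neighboring frames, in seconds

# Frame-by-frame (instantaneous) velocity: (x2 - x1) / dt, in pixels per second.
velocity = np.diff(x) / dt

print(velocity)  # [ 90. 120. 150. 180.]
```

`np.diff` gives you one velocity value per pair of neighboring frames, so you get an estimate for every frame transition, not just an average over the whole clip.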

To measure the distance travelled, you could take the cumulative sum of the frame-by-frame absolute displacements. That is, if your animal moves 15 pixels from frame 1 to frame 2, then steps back 20 pixels from frame 2 to frame 3, and forward again by 50 pixels from frame 3 to frame 4, the total distance traveled is 85 pixels. Note that you’d need some sort of scale to convert that back into physical units.
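The same example in code (made-up positions chosen to reproduce the 15/20/50 pixel steps above):

```python
import numpy as np

# Hypothetical x positions: +15 px, then -20 px, then +50 px between frames.
x = np.array([0.0, 15.0, -5.0, 45.0])

# Frame-by-frame absolute displacements, and their sum.
step_lengths = np.abs(np.diff(x))       # [15. 20. 50.]
distance_traveled = step_lengths.sum()  # 85.0 pixels

print(distance_traveled)
```

If you track both x and y, replace `np.abs(np.diff(x))` with `np.hypot(np.diff(x), np.diff(y))` so each step length is the 2D Euclidean distance.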

For the angle between keypoints, use the deeplabcut.analyzeskeleton function: it writes a CSV file containing the lengths and orientations of the “bones” defined in your config.yaml.
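If you’d rather compute a joint angle yourself from the keypoint CSV, here is a small sketch (with made-up pixel coordinates; the bodypart names are just illustrative). Note that no normalization of the keypoints is needed, since the angle between two vectors is scale-invariant:

```python
import numpy as np

# Hypothetical keypoint coordinates (in pixels) for a single frame.
stifle = np.array([200.0, 150.0])
hock = np.array([230.0, 220.0])
elbow = np.array([120.0, 160.0])

# Treat the two "bones" sharing the Stifle joint as vectors.
v1 = hock - stifle
v2 = elbow - stifle

# Angle via the dot product; clip guards against floating-point
# values marginally outside [-1, 1].
cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(angle_deg)
```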

Thanks for the reply. So basically (x2 - x1) * 30 = velocity. I am wondering what its units would be, because the x, y coordinates are plain numbers.

For measuring the distance, is there some documentation I could read? I got a little confused by your explanation.

I realized the pre-trained model had no definition of a skeleton. I guess there is a multi-animal example for which the skeleton is defined; I can perhaps look into that.

Units will be pixels per second.
The distance covered is the sum of all the small displacements of your animal from one frame to the next. To define your own skeleton, you could either use our graphical tool deeplabcut.SkeletonBuilder or manually define a list of body parts to be connected together (you could also have a look here) :slight_smile:

Thank you so much.

I guess there must be some kind of pre-calibration done to convert pixels into physical units such as meters per second. Is there a standard method for determining that? I think the pixel-to-physical-unit conversion is impossible to determine from a video sequence alone.

The distance covered is the sum of all the tiny displacements of your animal from a frame to the next.

Do you mean something similar to the displacement of keypoints mentioned before? So, for example:

frame 1: x1, distance traveled: x1 - x1
frame 2: x2, distance traveled: x2 - x1
...
frame 9: x9, distance traveled: x9 - x1

I am just considering the case of an object moving from left to right.
I did find the config.yaml file. I will experiment with it and post an update.

You could do this by adding a ruler in your camera’s field of view (so you’ll know what the pixels -> cm conversion is).
The way you calculate it is correct only if you are interested in the minimal distance traveled (i.e., simply the distance between the start and end points).
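To make the distinction concrete, here is a small sketch (with made-up positions) contrasting the minimal distance with the total path length:

```python
import numpy as np

# Hypothetical x positions: moves right, backtracks, moves right again.
x = np.array([0.0, 15.0, -5.0, 45.0])

# Minimal (net) distance: just the distance between start and end points.
net_distance = abs(x[-1] - x[0])        # 45.0 px

# Total path length: sum of all frame-to-frame absolute displacements.
path_length = np.abs(np.diff(x)).sum()  # 85.0 px
```

The two quantities only agree when the animal never reverses direction; as soon as it backtracks, the path length exceeds the net distance.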


Measuring the pixel-to-scale conversion seems like it would need to be done per pixel, or perhaps you mean:

  • Take a still picture.
  • Count the number of pixels across the width.
  • Place a scale parallel to the image plane and measure its length in centimeters to get the conversion factor.
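Once you have the conversion factor from the ruler, applying it is straightforward (a sketch with made-up calibration numbers):

```python
# Hypothetical calibration: a 10 cm ruler in the camera's field of view
# spans 250 pixels in the image.
ruler_cm = 10.0
ruler_px = 250.0
cm_per_px = ruler_cm / ruler_px   # 0.04 cm per pixel

# Convert a velocity measured in pixels/second into cm/second.
velocity_px_per_s = 120.0         # e.g., from (x2 - x1) * 30
velocity_cm_per_s = velocity_px_per_s * cm_per_px   # 4.8 cm/s
```

This assumes the animal moves roughly in the plane of the ruler; if it moves toward or away from the camera, a single conversion factor is no longer accurate.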

Thanks for the tips.

As the footage is from a smartphone, I wonder if some built-in features of the smartphone could be used.

A small question about the skeleton. I tried to connect three parts in config.yaml following your suggestion,
as shown below. Perhaps this is not the right way to do it? I haven't used the GUI; perhaps it's easier there?
Manually building the skeleton example

skeleton: [['Stifle', 'Offhindhock'],['Stifle', 'Elbow'],['Elbow','Nearknee']]
skeleton_color: rainbow_gist

Hi Jessy,

If we define a skeleton with SkeletonBuilder in a single-animal project, then we cannot use this skeleton info during training, but we can use it to calculate angles between the keypoints. Is that correct? I have read several posts saying that one needs to create a multi-animal project to be able to use the skeleton feature.

Hello @gokce_ergun! You’re correct: the skeleton is used during training only in multi-animal projects, in order to help link body parts together :slight_smile:
