Purpose of deeplabcut.DownSampleVideo(video_path, width=300)

Hello,

I am relatively new to DeepLabCut and am just learning the ropes. I came across the COLAB_DLC_ModelZoo notebook and tried it out on a few videos of mine, which has worked quite well so far. However, I have a few questions about the method deeplabcut.DownSampleVideo(video_path, width=300).
1st question: Why does this function improve my results? (I recorded the video with an iPhone.)
I can understand that analysis is much faster, since fewer pixels need to be processed. However, I am not sure why the downsampled video gets excellent results while the non-downsampled video places the points very poorly.
2nd question: Why does the downsampled video change its aspect ratio?
The downsampled videos keep their original proportions right after downsampling, but after analysis the output video has widened, so the image is distorted. This does not happen with the non-downsampled ones.
Thank you in advance.

Best regards

Louis

Hi Louis, welcome to the community.

The purpose of the function, in general, is just to help users.

In the model zoo, if you use a video from an iPhone and want to analyze a human, the video is very large compared to what the model was trained on, namely MPII Pose. Typical image sizes in computer vision are closer to 256 by 256, so these models don't generalize well if the human is massively bigger than at training time. That is why you see the performance difference.
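To make the scale mismatch concrete, here is a rough back-of-the-envelope sketch; the 4K resolution below is just an illustrative assumption, not taken from your recording:

```python
# Illustrative arithmetic only: an iPhone clip is often 1920x1080 or
# 3840x2160, while MPII-style training images are closer to 256x256.
iphone_w, iphone_h = 3840, 2160    # example 4K recording (assumption)
target_w = 300                     # the width passed to DownSampleVideo
scale = target_w / iphone_w
print(f"scale factor: {scale:.3f}")                          # ~0.078
print(f"downsampled: {target_w}x{round(iphone_h * scale)}")  # 300x169
# At ~300 px wide, a full-frame person is on the order of the scale the
# pretrained network saw during training, so it generalizes much better.
```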

If you are interested in how performance scales, we just published a paper that analyzes this rigorously; see our eLife paper (check out the suppl. figs ;): https://elifesciences.org/articles/61909

You will have to set the dimensions yourself for the downsampling, btw, so if it's stretched, just adapt that one line in the code; see the sketch below.
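If you'd rather not edit the DLC source, here is a minimal sketch of downsampling while keeping the aspect ratio, assuming ffmpeg is on your PATH (the function name and file names are just illustrative):

```python
import subprocess

def downsample_keep_aspect(video_path, out_path, width=300):
    """Scale a video to a fixed width; ffmpeg's -2 picks a matching
    even height automatically, so the frame is never stretched."""
    subprocess.run(
        ["ffmpeg", "-i", video_path,
         "-vf", f"scale={width}:-2",  # -2 = auto height, rounded to even
         out_path],
        check=True,
    )

downsample_keep_aspect("myvideo.mp4", "myvideo_downsampled.mp4")
```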

Thank you for the quick response. Now I understand!