Running TensorFlow image classification on images from labeled test video

I was able to successfully run a test video through my trained DeepLabCut network, and the feature I'm tracking was labeled as expected.

I want to extract images from this video so I can run image classification on the frames that contain the tracked feature. However, I'm not sure of an easy way to do this: although the resulting video contains labeled frames of my tracked feature, it also includes frames where the feature isn't present.

I would therefore like to avoid feeding unusable frames into my TensorFlow image classification algorithm. Since my datasets are bound to get very large, is there any way to automatically extract only the labeled frames from the test video?

Thank you!

Yes, DeepLabCut also gives you a likelihood for each detection (see Fig. 7a in https://www.nature.com/articles/s41593-018-0209-y), so you can look for all frames with confident detections.
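As a rough sketch of that idea, you could load the analyze_videos output with pandas and threshold the likelihood column. The file name, bodypart name, and the 0.9 cutoff below are placeholders you'd swap for your own project's values, and it assumes the standard DLC output layout (multi-index columns: scorer / bodypart / coords):

```python
# Sketch: find frames where DeepLabCut reports a confident detection.
# File name, bodypart name, and threshold are hypothetical -- adjust to your project.
import pandas as pd

LIKELIHOOD_CUTOFF = 0.9  # pick a threshold that suits your data

# Load the .h5 produced by deeplabcut.analyze_videos for your test video
df = pd.read_hdf("testvideoDLC_resnet50_myprojectshuffle1_200000.h5")

scorer = df.columns.get_level_values(0)[0]   # network/scorer name
bodypart = "myfeature"                       # the feature you are tracking

# Boolean mask of frames with a confident detection of that bodypart
confident = df[scorer][bodypart]["likelihood"] >= LIKELIHOOD_CUTOFF
good_frame_indices = df.index[confident].tolist()

print(f"{len(good_frame_indices)} of {len(df)} frames have confident detections")
```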

Thank you for your help!

In the csv output file from analyze_videos, I can see likelihood values for each frame. How would I go about obtaining the actual frames referenced by the indices in the csv file? I'm assuming the variables PredictedData and pdindex in the AnalyzeVideo function of predict_videos.py are what I should be using for this. If there's a simpler way, I'd love it if you could point me in the right direction!

I'm not clear on what you are asking; the csv/h5 output gives, for each frame:
frame index, X, Y, confidence readout

Here is an example how to load the data for each frame etc: https://github.com/AlexEMG/DLCutils/blob/master/Demo_loadandanalyzeDLCdata.ipynb
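If the goal is to get the actual images back out of the video, one possible approach (not the only one) is to use the frame indices from the csv to seek into the video with OpenCV and save those frames to disk. The file names, bodypart selection, and threshold here are again placeholders, and it assumes the standard three-row csv header written by analyze_videos:

```python
# Sketch: save the video frames referenced by confident rows of the DLC csv.
# Paths, bodypart, and threshold are hypothetical examples.
import os
import cv2
import pandas as pd

video_path = "testvideo.mp4"
out_dir = "confident_frames"
os.makedirs(out_dir, exist_ok=True)

# The DLC csv has three header rows (scorer, bodyparts, coords); the first column is the frame index
df = pd.read_csv("testvideoDLC_resnet50_myprojectshuffle1_200000.csv",
                 header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]
bodypart = df.columns.get_level_values(1)[0]   # or set your tracked feature's name explicitly

likelihood = df[(scorer, bodypart, "likelihood")]
good_frames = likelihood.index[likelihood >= 0.9]

cap = cv2.VideoCapture(video_path)
for idx in good_frames:
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))  # jump to the frame index from the csv
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(os.path.join(out_dir, f"frame_{int(idx):06d}.png"), frame)
cap.release()
```

Note that seeking with CAP_PROP_POS_FRAMES can be slow or slightly inexact with some codecs; reading the video sequentially and keeping only the wanted indices is a more robust alternative for long videos.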
