Output analysis for behavior classification

My laboratory is interested in using DeepLabCut to determine the occurrence and duration of target behaviors during classic laboratory tasks (open field, radial arm maze, operant conditioning chambers, etc.).

Setup has been a breeze (thank you to the wonderful developers for the clear, informative documentation). I am able to produce videos with strong agreement between the model's predictions and human-placed labels.

The videos look beautiful, but I'm a bit lost on how to proceed with our goal of classifying the output for occurrences of target behaviors.

My intuition is that the most robust, generalizable approach would be to run the CSV output through a second DNN trained on DLC output from frames manually scored as containing the behavior. Is that right? I'm, at best, at an intermediate level with Python, so I'm hoping someone here who has approached this problem can point me in the right direction.

I apologize if this is a bit beyond the scope of this forum, but I thought this might be the most likely place to find someone who has successfully approached this task.

Perfect place to ask, and others may jump in. Firstly, thanks for the nice feedback - it's really, really appreciated! :slight_smile: Secondly, you might try the ETH-DLC analyzer for exactly the tasks you are describing; they have classifiers for this, and it's open source, so you can modify it. (It's linked, along with some others, at https://github.com/DeepLabCut/DLCutils.)

Otherwise, I use scikit-learn tools for classification. You might not want to use raw position as a feature; derived features such as velocity work better if you want behaviors classified independently of where they occur in the arena. A rough sketch of that pipeline is below.
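
To make that concrete, here's a minimal sketch, not a drop-in solution. It assumes a DLC output file `videoname.csv` with the usual three-row header (scorer/bodyparts/coords), a hypothetical bodypart named `snout`, and a hand-made `labels.csv` with a `behavior` column marking each frame 0 (other) or 1 (target behavior) - all of those names are placeholders for whatever your project uses:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load DLC tracking output; header=[0, 1, 2] handles the multi-row header.
df = pd.read_csv("videoname.csv", header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]

# Per-frame velocity of one bodypart ("snout" is a placeholder name):
# location-independent, unlike raw x/y position.
x = df[(scorer, "snout", "x")].to_numpy()
y = df[(scorer, "snout", "y")].to_numpy()
velocity = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0]))

# Smooth over a short window so single-frame jitter doesn't dominate.
velocity_smooth = (
    pd.Series(velocity).rolling(5, center=True, min_periods=1).mean().to_numpy()
)

# Stack whatever features you like; here just velocity and its rolling mean.
X = np.column_stack([velocity, velocity_smooth])

# Frame-by-frame behavior labels you created by hand (0/1 per frame).
labels = pd.read_csv("labels.csv")["behavior"].to_numpy()

# shuffle=False keeps temporal order, so the test set is held-out video time
# rather than frames interleaved with training frames.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, shuffle=False
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A random forest on two features is just a starting point; in practice you'd add more bodyparts, distances between them, and features computed over sliding windows, since most behaviors are defined by movement over time rather than by a single frame.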