My laboratory is interested in using DeepLabCut to determine the occurrence and duration of target behaviors during classic laboratory tasks (open field, radial arm maze, operant conditioning chambers, etc.).
Setup has been a breeze (thank you to the wonderful developers for the informative and clear documentation), and I am able to produce videos with strong agreement between the model's labels and human-placed labels.
The videos look beautiful, but I'm a bit lost on how to proceed with our goal of classifying the output to detect when a target behavior occurs.
My intuition is that the most robust, generalizable method would be to run the csv output through a second classifier trained on DLC pose data from frames manually labeled as containing that behavior. Is that the right approach? I'm, at best, at an intermediate level with Python, so I'm hoping someone here who has approached this problem can point me in the right direction.
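To make the question concrete, here is a minimal sketch of what I have in mind, using only NumPy and synthetic trajectories in place of real DLC csv output. The idea: convert per-frame (x, y) coordinates for a body part into per-window features (mean position, mean speed), then classify windows as behavior / not-behavior. The window size, features, and the trivial nearest-centroid classifier at the end are all placeholders I made up for illustration; a real pipeline would presumably use manually annotated frames and a proper classifier.

```python
import numpy as np

def window_features(xy, win=15):
    """Turn per-frame (x, y) coordinates for one body part into
    per-window features: mean x, mean y, and mean frame-to-frame speed.
    `win` (window length in frames) is an arbitrary placeholder value."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # per-frame displacement
    n = (len(speed) // win) * win                          # drop the ragged tail
    speed = speed[:n].reshape(-1, win)
    pos = xy[1:n + 1].reshape(-1, win, 2)
    return np.column_stack([pos.mean(axis=1), speed.mean(axis=1)])

# Toy stand-in for DLC output: "still" vs "moving" trajectories.
rng = np.random.default_rng(0)
still = rng.normal(0, 0.1, size=(300, 2))                      # jitter in place
moving = np.cumsum(rng.normal(0, 1.0, size=(300, 2)), axis=0)  # random walk

X = np.vstack([window_features(still), window_features(moving)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

# Nearest-centroid on the mean-speed feature alone, just to close the loop;
# a real classifier (e.g. a small network) would replace this step.
c0, c1 = X[y == 0, 2].mean(), X[y == 1, 2].mean()
pred = (np.abs(X[:, 2] - c1) < np.abs(X[:, 2] - c0)).astype(int)
acc = (pred == y).mean()
```

In other words: featurize the pose trajectories per time window, label windows from the manually scored frames, and fit a classifier on those features. Does this match what others have done, or is there a more standard route?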
I apologize if this is a bit beyond the scope of this forum, but I thought this might be the most likely place to find someone who has successfully approached this task.