First of all, I should note that I have been programming for less than a year, so there is still a lot I need to learn and understand.
I’ve been using DeepLabCut for a while, and I’m currently working on some Python scripts to analyse common variables in several typical behavioural tasks in neuroscience, such as object recognition, the Y-maze, the elevated plus maze, etc. Because this tool is so cool, everyone is now developing scripts to analyse data from DLC. In fact, a paper was recently published in which the authors developed an R package to analyse different values from DLC trackings: https://doi.org/10.1038/s41386-020-0776-y.
Together with @EMerinoM, we’ve been looking at that script and we think the ideas it presents are great. They’ve done a great job. However, we have one small doubt.
To clean up rows with likelihood < 0.95, they delete those rows (or at least that’s what we’ve understood). I was not deleting any rows; instead, I was correcting the labels and retraining in DLC. Even so, not 100% of the estimations end up with likelihood > 0.95: after retraining, I still get some likelihoods < 0.95. I was keeping those predictions because I understood they were estimated from the frames below and above. Looking at the ‘x’ and ‘y’ coordinates, they did not differ much from the coordinates in the surrounding rows. Here is an example:
I noticed that, in the nose tracking for example, when mice are grooming (I work with mice) the nose is not visible in the frame, and I understand that is why it gives a low likelihood value. However, I thought the coordinates were still estimated from the previous and following frames. I hope I’m explaining myself…
I want to understand what kind of prediction is made when the likelihood is below 0.95, as I don’t think I really understand it. I’d also like to know which approach you think introduces less error: a) deleting the rows with bad likelihood values, or b) keeping those rows because they are estimations. I hope I’ve explained it well…
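To make the two options concrete, here is a minimal pandas sketch of what I mean. The coordinates are made up, and for simplicity I use flat column names instead of DLC’s real multi-level (scorer/bodypart/coords) columns; this is just to illustrate dropping the bad rows versus keeping them and interpolating the coordinates from the neighbouring frames:

```python
import numpy as np
import pandas as pd

# Made-up tracking for one bodypart; frame 2 is an outlier with low likelihood
# (e.g. the nose hidden during grooming).
df = pd.DataFrame({
    "x": [100.0, 101.5, 250.0, 103.0, 104.2],
    "y": [50.0, 50.8, 10.0, 52.1, 52.9],
    "likelihood": [0.99, 0.98, 0.40, 0.97, 0.99],
})

threshold = 0.95

# Option (a): delete the low-likelihood rows entirely.
dropped = df[df["likelihood"] >= threshold]

# Option (b): keep the rows, but replace the unreliable coordinates with
# values linearly interpolated from the frames above and below.
interpolated = df.copy()
bad = interpolated["likelihood"] < threshold
interpolated.loc[bad, ["x", "y"]] = np.nan
interpolated[["x", "y"]] = interpolated[["x", "y"]].interpolate()

print(dropped)
print(interpolated)
```

With option (a) frame 2 disappears (which also breaks the constant time step between rows), while with option (b) frame 2 gets x = 102.25, y = 51.45, halfway between its neighbours.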
Thank you, everyone, for your help and suggestions.