How does DeepLabCut deal with bodyparts that are occluded most of the time?

Dear all,

Maybe some of you have experience with this kind of problem.

I have video-recorded mice in their home cages from above, using a wide-field-of-view lens. That means the body parts facing away from the lens are occluded most of the time. I am starting to label frames and wondering how DeepLabCut deals with body parts that are occluded for most of the video. The strategy I was planning to use is to label those body parts only when they are clearly visible, but I guess that implies having to label many more frames for DeepLabCut to learn them, right? Will DeepLabCut then correctly label those parts only when they are visible?

Thanks!

A

Dear @auesro,

You can use DeepLabCut in two very different ways:

  • Either you label only visible bodyparts; then DeepLabCut will typically report only visible bodyparts with high confidence. This is, e.g., what we did for the mouse reaching and fly data in the Nature Neuroscience paper. This does not imply that you need to label “more frames”; you just need to label consistently.
  • Or you can “guess” where bodyparts are (when they are occluded), and the network learns to do the same. Depending on the application, that can be what you need. However, it is of course better to make a direct measurement, as this is more accurate. A great illustration of this guessing, and of the biases that DLC learns, is given in this paper: Playing magic tricks to deep neural networks untangles human deception.

Happy DeepLabCutting!

Alexander


I have a similar question to @auesro's. Let's say I want to label the four limbs of a mouse, with six label points per limb: what do we need to edit in config.yaml so that DeepLabCut recognises the four limbs independently? Please kindly send me a sample config.yaml. Thanks in advance.

Ok, I understand your point.
Now, a more specific question: what would be the best strategy if you plan to feed the DLC output into B-SOID? As far as I understand, B-SOID will “fill in” the missing data (body parts occluded or invisible) with a time average over past and future frames… In that case, I would think the best strategy is to make DLC learn where those occluded parts are located (strategy 2), since that will always be better than just mathematically interpolating the missing positions. Am I right?

Thanks, Alexander!

Cheers,

A

Yes, before labeling you should create bodyparts in the config file that are limb-specific (a sample config.yaml bodyparts section is sketched below); e.g.:

  • LEFTfrontleg_point1
  • RIGHTfrontleg_point1
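
For concreteness, here is a minimal sketch of what the bodyparts section of config.yaml could look like. The label names are just illustrative examples following the pattern above, with six points per limb; everything else in the config.yaml that your project generates stays unchanged:

```yaml
# Illustrative bodyparts list: six points per limb, with the limb name
# encoded in each label so the four limbs are tracked independently.
bodyparts:
- LEFTfrontleg_point1
- LEFTfrontleg_point2
- LEFTfrontleg_point3
- LEFTfrontleg_point4
- LEFTfrontleg_point5
- LEFTfrontleg_point6
- RIGHTfrontleg_point1
- RIGHTfrontleg_point2
# ... and so on: point1–point6 for RIGHTfrontleg, LEFThindleg and RIGHThindleg
# (24 bodyparts in total).
```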

Continuing the discussion from How does DeepLabCut deal with bodyparts that are occluded most of the time?:

Hi Auesro and Alexander,
I have the same question. Do you think it is better to leave occluded bodyparts unlabeled, or to guess their location and label them?

Another question: will DeepLabCut give a prediction for every bodypart in every frame, even when it is occluded? I remember that for refining the model we are advised to remove predicted bodyparts that are invisible; after several rounds of such refinement, how does DeepLabCut deal with these invisible bodyparts?

Thanks!
Huasheng

Hi Huasheng,

Both strategies are valid. If you want the highest accuracy, then not guessing is best. If you are less concerned and would rather teach the network to guess, then label the occluded bodyparts where you want it to guess. A great example of this is in this paper that uses DeepLabCut: https://venturebeat.com/2019/08/21/researchers-attempt-to-fool-ai-with-magic-tricks/

DLC always predicts every bodypart in every frame. That is why the p-cutoff is crucial for thresholding the data: if you only want points that the network is more than 90% confident about, set pcutoff in the config.yaml file to 0.9 after you get the analyzed h5 files. This is also crucial if you use our tools to create labeled videos.
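
For example, once you have the analyzed .h5 file you can apply the p-cutoff yourself with a few lines of pandas. This is just a sketch, not a DeepLabCut function: the file names are placeholders, and it assumes the standard DeepLabCut output layout with a (scorer, bodyparts, coords) column MultiIndex where coords are x, y, likelihood.

```python
import numpy as np
import pandas as pd

pcutoff = 0.9  # same role as the pcutoff value in config.yaml
df = pd.read_hdf("videoDLC_analyzed.h5")  # placeholder name for an analyzed video

scorer = df.columns.get_level_values(0)[0]
for bodypart in df.columns.get_level_values(1).unique():
    likelihood = df[(scorer, bodypart, "likelihood")]
    low = likelihood < pcutoff
    # Mask out coordinates the network is not confident about
    # (e.g. occluded bodyparts) before using the data downstream.
    df.loc[low, (scorer, bodypart, "x")] = np.nan
    df.loc[low, (scorer, bodypart, "y")] = np.nan

df.to_hdf("videoDLC_analyzed_thresholded.h5", key="df_with_missing")
```

After thresholding, occluded bodyparts simply show up as NaNs in the frames where the network is not confident.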

Got it, thanks!

Huasheng
