Setting *lower* as well as upper confidence bounds for outlier selection?

Many of our images have features that are definitely not in the frame, and the model is generally good at labeling these with confidence values very close to zero.

We’d like to be able to grab points for labeling and retraining that are uncertain (confidence ~0.5) rather than certainly present (e.g. 99% confidence) or certainly absent (e.g. 1% confidence).

Is there a way to do this? If not, are there plans to include such a feature?
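
For concreteness, here is roughly what we have in mind (a minimal sketch, assuming predictions are stored in a standard DeepLabCut .h5 output; the file path, column level name, and band limits below are placeholders):

```python
import pandas as pd

PREDICTIONS_H5 = "videos/myvideoDLC_resnet50.h5"  # hypothetical path
LOW, HIGH = 0.3, 0.7  # "uncertain" confidence band

# DLC predictions: MultiIndex columns (scorer, bodyparts, coords)
df = pd.read_hdf(PREDICTIONS_H5)
likelihoods = df.xs("likelihood", axis=1, level="coords")

# Frames where any body part is neither clearly present nor clearly absent
uncertain = ((likelihoods > LOW) & (likelihoods < HIGH)).any(axis=1)
frames_to_label = likelihoods.index[uncertain]
print(f"{uncertain.sum()} candidate frames for relabeling")
```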

This isn’t currently implemented, as selecting outliers this way isn’t a simple problem. If you want to add this, please feel free to make a PR! In the meantime, you may consider trying the “jump” and SARIMAX options that are also built into the function, or manually grabbing outlier frames. Not every error needs to be corrected, so even a handful of frames is typically enough to cover these edge cases. Also triple-check your original labels (you can edit them by re-loading them into the labeling GUI), as these can be another source of errors.
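
For reference, those built-in options would be invoked roughly like this (a sketch; the argument names follow the DLC docs, but double-check them against your installed version, and the paths are placeholders):

```python
import deeplabcut

config_path = "/home/user/myproject/config.yaml"       # placeholder
videos = ["/home/user/myproject/videos/session1.avi"]  # placeholder

# "jump": flags frames where a body part moves more than epsilon pixels
deeplabcut.extract_outlier_frames(
    config_path, videos, outlieralgorithm="jump", epsilon=20
)

# "fitting": flags frames that deviate from a SARIMAX time-series fit
deeplabcut.extract_outlier_frames(
    config_path, videos, outlieralgorithm="fitting", ARdegree=3, MAdegree=1
)
```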

(Here is where you can edit the code: https://github.com/AlexEMG/DeepLabCut/blob/master/deeplabcut/refine_training_dataset/outlier_frames.py#L22 & https://github.com/AlexEMG/DeepLabCut/blob/master/deeplabcut/refine_training_dataset/outlier_frames.py#L145)


Thanks for pointing out where to try experimenting; I’ve edited the script on my home system and will try it out tomorrow :slight_smile:

I think our use case is somewhat unusual because we know for certain there will be long periods where the animal is out of view, so many of the low-confidence values the model produces actually reflect high confidence that the body part is absent. This makes it suboptimal to draw from the set of all low-confidence values.

Ah, I see! In that case, perhaps you could use only the parts of the videos where the animal is present, i.e. crop the video or use “start” and “stop” as with extract_frames. But “jump” should also work well here, since a body part that’s out of the frame isn’t jumping :slight_smile:
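
If you do end up editing the script, one way to combine both ideas is to first mask out the stretches where the animal is clearly absent and only then look for uncertain frames. A rough sketch (the .h5 path, thresholds, and window length are assumptions to adapt):

```python
import pandas as pd

PREDICTIONS_H5 = "videos/myvideoDLC_resnet50.h5"  # hypothetical path
ABSENT, LOW, HIGH = 0.05, 0.3, 0.7  # assumed thresholds
WINDOW = 30  # frames; smooths over brief detection dropouts

df = pd.read_hdf(PREDICTIONS_H5)
likelihoods = df.xs("likelihood", axis=1, level="coords")

# Out of view: every body part near zero for a sustained stretch
all_absent = (likelihoods < ABSENT).all(axis=1).astype(float)
out_of_view = all_absent.rolling(WINDOW, center=True, min_periods=1).mean() > 0.9

# Uncertain frames, excluding the out-of-view stretches
uncertain = ((likelihoods > LOW) & (likelihoods < HIGH)).any(axis=1)
frames_to_label = likelihoods.index[uncertain & ~out_of_view]
print(f"{len(frames_to_label)} frames selected for refinement")
```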