I’m a data scientist at St Jude, working on classifying animal behavior (mice). I’ve been using DeepLabCut to track the animals’ pose, and then applying some ML methods to classify behaviors from annotated videos. We’ve been considering using or building behavior annotation tools in napari for this process.
Building an ML classifier is inherently iterative – label, train, inspect results, relabel if needed, and so on. We’ve been thinking about how to make that train-inspect-relabel loop as tight as possible. To be more explicit, the ideal workflow would be something like: frames are labeled, the model is trained, inference is run, the results are overlaid on the video in napari, and the tool suggests frames to relabel, then retrain, etc. Suggesting frames to relabel is a form of ‘active learning’, and can be a vastly more efficient way to train an ML model. But to begin with, just something that handles both the video labeling and the ML training/inference would be a useful start.
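To make the frame-suggestion step concrete, here is a minimal sketch of uncertainty sampling, one common active-learning strategy: given per-frame class probabilities from the current classifier, it proposes the frames where the model is least confident. The function name and array shapes here are hypothetical, not part of any existing napari or DeepLabCut API.

```python
import numpy as np

def suggest_frames(probs: np.ndarray, n: int = 5) -> np.ndarray:
    """Uncertainty sampling: return indices of the n frames whose
    predicted class distribution has the highest entropy.

    probs: (n_frames, n_classes) array of per-frame class probabilities
    produced by the current behavior classifier.
    """
    eps = 1e-12  # avoid log(0) for confident predictions
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Sort by entropy, most uncertain first, and keep the top n.
    return np.argsort(entropy)[::-1][:n]

# Example: 4 frames, 2 behavior classes; frame 2 is the most ambiguous.
p = np.array([[0.95, 0.05],
              [0.80, 0.20],
              [0.50, 0.50],
              [0.10, 0.90]])
print(suggest_frames(p, n=2))  # → [2 1]
```

In the workflow above, these indices could drive the relabeling step: jump the napari viewer to each suggested frame, annotate it, and retrain.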
I wonder what support napari has, or will have, for adding such analysis plugins? I do see in the roadmap that support for functional plugins is on the cards. But what form will that take exactly? What would we need to develop on our part?
I also wonder if you know of any behavior classification plugins currently in development for napari along the lines of what I’ve described? I haven’t seen anything listed here: Issues · napari/napari · GitHub. I do know that DeepLabCut, for instance, is planning to use napari for its labeling GUI in a future release. Behavior classification can take the DLC-tracked poses as input, so having the pose labeling, the behavior labeling, and the classification all take place within napari would be very nice.