Documenting pixel classifier creation for methods section

Hello everyone,

During a virtual image analysis club today, someone asked a great question: how do you document the process of training a pixel classifier?

A few of us shared our approaches, but I’d be interested in hearing y’all’s to see if there is a consensus on best practices. I doubt reviewers would accept “I finger painted the image until it looked good”!

  • Trevor
5 Likes

This is actually what’s done in many cases, although often not written as explicitly. For classifiers in general, the output depends largely on the quality of the training set, and building a good training set is often an iterative process with some subjectivity in deciding what goes in which class.
What is the purpose of the description? Is it for the reader to understand what’s been done, or for the process to be reproducible? The latter would probably require giving more details than the former.

2 Likes

The neatest way I can think of is to share the dataset and the annotations that were used to train the classifier, and let your peers decide whether you were unbiased or not.

In QuPath, this is easy: just share a project where you did the annotations. Ilastik is easy too. I do not know about others, though… But it should probably be standard practice to share the manual annotations so that the classifier can be rebuilt as needed.
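
If your software doesn’t have a project format you can hand over directly, a minimal sketch of the same idea is below: bundle the label mask, the class names, and a checksum of the raw training image into one versionable file, so a reviewer can retrain from exactly the same annotations. This is not a QuPath or ilastik API; the file names and layout are just placeholders.

```python
"""
Hypothetical sketch: package training annotations so a classifier can be
rebuilt later. Paths, file names, and the archive layout are illustrative.
"""
import hashlib
import json
from pathlib import Path

import numpy as np


def export_training_annotations(label_mask: np.ndarray,
                                class_names: list[str],
                                raw_image_path: Path,
                                out_path: Path) -> None:
    # Checksum ties the annotations to one specific version of the raw image.
    raw_bytes = raw_image_path.read_bytes()
    metadata = {
        "raw_image": raw_image_path.name,
        "raw_image_sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "class_names": class_names,   # label value i in the mask -> class_names[i - 1]
        "background_label": 0,        # 0 = unannotated pixels
    }
    # npz keeps the mask compact; the JSON string rides along as metadata.
    np.savez_compressed(out_path,
                        label_mask=label_mask,
                        metadata=json.dumps(metadata))


# Example usage (placeholder paths): two classes annotated on a 512x512 image.
# mask = np.zeros((512, 512), dtype=np.uint8)  # fill with 1 = tissue, 2 = background
# export_training_annotations(mask, ["tissue", "background"],
#                             Path("slide_01.tif"), Path("slide_01_labels.npz"))
```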

6 Likes

You might be able to use the document I give my students for Trainable Weka Segmentation. I tried to identify all the portions of the training that would allow one to reproduce it. Oops…it looks like I can’t upload a Word file here. Please email me at ron_despain@hotmail.com and I’ll send you a copy.
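
In the same spirit, one way to capture “all the portions of the training” is to write the settings that determine a run into a JSON sidecar next to the training image, and cite that file in the methods section. The sketch below is only an example of what such a sidecar might contain; the field names and values are illustrative, not an official Trainable Weka Segmentation export.

```python
"""
Illustrative sketch: record the settings that determine a pixel-classifier
training run in a small JSON file. Values below are placeholders.
"""
import json
from pathlib import Path

settings = {
    "plugin": "Trainable Weka Segmentation",
    "training_image": "slide_01.tif",          # placeholder file name
    "features": ["Gaussian blur", "Sobel filter", "Hessian",
                 "Difference of Gaussians", "Membrane projections"],
    "sigma_min": 1.0,
    "sigma_max": 16.0,
    "classifier": "FastRandomForest",          # example classifier choice
    "num_trees": 200,
    "random_seed": 42,                         # fix for reproducibility
    "classes": ["tissue", "background"],
    "annotation_file": "slide_01_labels.npz",  # e.g. the export sketch above
}

Path("slide_01_tws_settings.json").write_text(json.dumps(settings, indent=2))
```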

1 Like

Glad to see I’m not the only one who has trouble eliminating some of the subjectivity :sweat_smile:

Saving annotations for later/peer review is also the approach I take. I’ve also heard of just recording the entire training session for easy review later.

One cool approach would be to plot some metric each time you iterate and preview the classifier, so you get a numerical estimate of “accuracy” that should, in theory, converge as annotations are added. But I could see how certain metrics would only work for certain datasets with certain features, etc.
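
A hedged sketch of that idea: after each round of new annotations, retrain on everything annotated so far and score against a small, fixed, held-out set of labelled pixels, then plot the score against the number of annotated pixels. Everything here is made up for illustration (random features standing in for a real filter bank, a generic random forest rather than any particular software’s classifier).

```python
"""
Sketch: does held-out accuracy stop improving as annotations are added?
Features and labels are synthetic placeholders for real per-pixel data.
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Fixed held-out pixels, labelled once and never used for training.
X_holdout = rng.normal(size=(500, 8))
y_holdout = (X_holdout[:, 0] + 0.5 * X_holdout[:, 1] > 0).astype(int)

scores, n_pixels = [], []
X_train = np.empty((0, 8))
y_train = np.empty((0,), dtype=int)

for iteration in range(10):
    # Each "iteration" adds another batch of newly annotated pixels.
    X_new = rng.normal(size=(100, 8))
    y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] > 0).astype(int)
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    scores.append(balanced_accuracy_score(y_holdout, clf.predict(X_holdout)))
    n_pixels.append(len(y_train))

plt.plot(n_pixels, scores, marker="o")
plt.xlabel("Annotated pixels in training set")
plt.ylabel("Balanced accuracy on held-out pixels")
plt.show()
```

Balanced accuracy is just one choice; as noted above, which metric makes sense depends on the dataset and how imbalanced the classes are.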