Evaluation of QuPath's classifier

Hi all

We have trained two object classifiers in QuPath and would like to evaluate their performance with a confusion matrix and ROC analysis on an additional validation set annotated in QuPath. I am wondering how I can perform this analysis, either in QuPath or somewhere else? Thank you for your help!

I’m afraid there’s no built-in way to do this in QuPath currently. Validating such classifiers meaningfully is hard, particularly when the classifier has been trained interactively by providing hard examples and when nearby objects have highly-correlated features. This makes the training samples far from representative of the data as a whole, so splitting them into training/validation/test sets wouldn’t give very meaningful results – rather, another set of annotations would somehow need to be generated to assess performance.

You may be able to get some of what you want by exporting detection measurements (with their classifications) in QuPath and evaluating these elsewhere (e.g. R, Python). But I think that figuring out what would be meaningful will be quite tricky and application-specific.
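As a rough illustration of the second step, here is a minimal Python sketch assuming you have exported the detection measurements from QuPath as a tab-separated file that contains the predicted class plus a manually assigned ground-truth column for the validation set. The file name, column names ("Class", "Ground truth", "Prob: Tumor"), and the choice of "Tumor" as the positive class are all hypothetical and will depend on your own export and application.

```python
# Minimal sketch: confusion matrix and ROC analysis from exported QuPath
# detection measurements. Column names below are assumptions, not fixed
# QuPath output names - adjust them to match your own export.
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

# Load the exported measurements (assumed tab-separated)
df = pd.read_csv("detections.tsv", sep="\t")

predicted = df["Class"]        # class assigned by the trained classifier
truth = df["Ground truth"]     # manually assigned reference class (hypothetical column)

# Confusion matrix over all classes present in the validation set
labels = sorted(truth.unique())
cm = confusion_matrix(truth, predicted, labels=labels)
print(pd.DataFrame(cm, index=labels, columns=labels))

# ROC analysis needs a continuous score per detection, e.g. an exported
# class probability; "Prob: Tumor" is a hypothetical column name here.
if "Prob: Tumor" in df.columns:
    y_true = (truth == "Tumor").astype(int)
    fpr, tpr, _ = roc_curve(y_true, df["Prob: Tumor"])
    print("AUC:", roc_auc_score(y_true, df["Prob: Tumor"]))
```

Note that this treats every detection as an independent sample; as mentioned above, correlated features between nearby objects mean the resulting numbers should be interpreted cautiously.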
