Classifier evaluation

Hi all,

I’ve made a random forest classifier for detecting Fos-positive cells in the mouse brain after DAPI segmentation. I need to review my annotations (Fos-positive and Fos-negative point annotations) and compare this training data against classifier performance. There doesn’t seem to be a built-in function for calculating F1/precision/recall, but is there an easy way to compare how I annotated each detection against what my classifier predicts, using the test or validation sets that I make?

Ideally, I would have output such as: x detections in the region, y annotated as Fos-positive, and z of those y annotations captured by the classifier as true positives.
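For concreteness, once each detection has both an annotated class and a predicted class (e.g. from an exported measurement table), those counts and the usual metrics reduce to a simple comparison. A rough Python sketch; the label strings and the assumption that annotations and detections are already paired one-to-one are mine, not anything QuPath provides:

```python
def evaluate(annotated, predicted, positive="Fos-positive"):
    """Compare ground-truth labels against classifier predictions.

    annotated, predicted: parallel lists of class labels, one entry per
    detection (assumed already matched to each other, e.g. by centroid).
    Returns total detections, annotated positives, true positives, and
    precision/recall/F1 for the positive class.
    """
    assert len(annotated) == len(predicted)
    pairs = list(zip(annotated, predicted))
    tp = sum(1 for a, p in pairs if a == positive and p == positive)
    fp = sum(1 for a, p in pairs if a != positive and p == positive)
    fn = sum(1 for a, p in pairs if a == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {
        "detections": len(annotated),       # x in the description above
        "annotated_positive": tp + fn,      # y
        "true_positives": tp,               # z
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```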

Hi @goodwinnastacia,
I created a few scripts that do something similar, though they haven’t been validated by any publication or similar. They let you create a fixed area, manually annotate the cells within it, and then generate a confusion matrix; the output is a CSV file, so you can do what you want with it from there.

If you are interested in trying them out, I could message them to you, but I don’t know of any built-in way to do what you are asking. The closest was the confusion matrix generated by the (now deprecated) detection classifier.
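For reference, once you have a 2×2 confusion matrix like the one such a script might write out, precision/recall/F1 fall out directly. A quick Python sketch; the CSV column names here are made up for illustration, not the actual output of those scripts:

```python
import csv
import io

def metrics_from_confusion(tp, fp, fn, tn):
    """Precision, recall, and F1 from 2x2 confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def metrics_from_csv(text):
    """Parse a hypothetical one-row CSV with columns tp,fp,fn,tn."""
    row = next(csv.DictReader(io.StringIO(text)))
    counts = (int(row[k]) for k in ("tp", "fp", "fn", "tn"))
    return metrics_from_confusion(*counts)
```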

That would be much appreciated, thank you so much for the quick reply!