Script for batch analysis for classification on >100 slides, possible?

Background

I have developed a pixel classifier in QuPath 0.2.3 for large .svs files, and it works fantastically. But now I need to apply it to the entire project for this to be useful, and I am not great at scripting/coding.

Does anyone have a script that I could use to “Run for project” using a saved pixel classifier or any other classifier? Or any suggestions on how to start?

I’ve looked through the docs, but I’m not sure if I should be using JSON or where to start really. Any help is much appreciated.

Hi @elijahedmondson,

This script creates annotations based on a pixel classifier’s output:

def minArea = 0.0 // To change
def minHoleArea = 10.0 // To change
def classifierName = "classifierName" // To change

// Select all annotations
selectAnnotations()

// Apply pixel classifier inside them
createAnnotationsFromPixelClassifier(classifierName, minArea, minHoleArea)
print "Done!"
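
To apply it to every image, open this in the script editor (Automate → Show script editor) and use Run → Run for project, which lets you pick which images in the project to process in batch.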

Yep, it really depends on what you want to do with the classifier; there are different functions for each. Do you just want output measurements? Are you running something else within the annotations? Would detections be easier to handle?

The classifier itself is more of an idea; what you want to do with it is what gets added to the script.
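
For reference, here is a minimal sketch of the main options in QuPath 0.2.x, assuming a pixel classifier saved in the project (the name "MyClassifier" and the area thresholds are placeholders to change):

def classifierName = "MyClassifier" // name of the pixel classifier saved in the project
def minArea = 0.0       // minimum region area to keep (calibrated units, e.g. µm^2)
def minHoleArea = 0.0   // minimum hole area to keep

// The classifier is applied within the selected annotations
selectAnnotations()

// Option 1: create annotation objects from the classified regions
createAnnotationsFromPixelClassifier(classifierName, minArea, minHoleArea)

// Option 2: create detection objects instead (often easier to handle in bulk)
//createDetectionsFromPixelClassifier(classifierName, minArea, minHoleArea)

// Option 3: only add per-class area measurements to the selected objects
//addPixelClassifierMeasurements(classifierName, classifierName)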


Thank you very much for your help.

For this project I would like the % of pixels classified as tumor vs stroma vs necrosis.

The relevant script should mostly write itself (i.e. turn up under the 'Workflow' tab) so long as you are using a project and save the pixel classifier inside the project.
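
For the tumor vs stroma vs necrosis percentages, the command the workflow records when you generate measurements should look roughly like the sketch below. The classifier name "4way" is a placeholder, the exact measurement names will follow your own class names, and if only per-class areas are added the percentages can be derived from those:

// Select the tissue annotation(s) to measure within
selectAnnotations()

// Add per-class area measurements from the saved pixel classifier
// (the second argument is the ID used to name the measurements)
addPixelClassifierMeasurements("4way", "4way")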


That reminds me: I noticed that when you save and apply object classifiers for the first time, the step is not saved to the workflow (say, building a composite multiplex cell classifier, followed by Save & apply). The user needs to "Load" the classifier a second time for it to show up in the workflow. I suspect that helps prevent spam from the individual classifiers if you keep saving and overwriting, but for the composite classifier it might be nice to add it to the workflow immediately.

@elijahedmondson As Pete says, all you need to do is generate the measurements (with the button) when you load your classifier, then add that step into the script you run for your whole project. I think the results show up under the "Image" tab, so make sure that is the type you select in Measure → Export measurements.
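
If you would rather script the export too, something along these lines should work in QuPath 0.2.x using MeasurementExporter; the output path is a placeholder, and you would swap PathAnnotationObject for whichever object level your measurements were added to (or just use the Measure → Export measurements dialog):

import qupath.lib.gui.tools.MeasurementExporter
import qupath.lib.objects.PathAnnotationObject

// Export one row per annotation for every image in the current project
def project = getProject()
def imagesToExport = project.getImageList()
def outputFile = new File("/path/to/measurements.tsv") // placeholder path

new MeasurementExporter()
    .imageList(imagesToExport)
    .separator("\t")
    .exportType(PathAnnotationObject.class)
    .exportMeasurements(outputFile)

print "Export complete"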


This worked exactly as I needed, but I changed it to createDetectionsFromPixelClassifier():

// Set the image type and default H&E stain vectors
setImageType('BRIGHTFIELD_H_E');
setColorDeconvolutionStains('{"Name" : "H&E default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049 ", "Stain 2" : "Eosin", "Values 2" : "0.2159 0.8012 0.5581 ", "Background" : " 242 243 242 "}');

// Detect the tissue and create a single annotation around it
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 210,  "requestedPixelSizeMicrons": 20.0,  "minAreaMicrons": 10000.0,  "maxHoleAreaMicrons": 1000000.0,  "darkBackground": false,  "smoothImage": true,  "medianCleanup": true,  "dilateBoundaries": false,  "smoothCoordinates": true,  "excludeOnBoundary": false,  "singleAnnotation": true}');

def minArea = 100.0 // To change
def minHoleArea = 100.0 // To change
def classifierName = "4way" // To change

// Select the tissue annotations
selectAnnotations()

// Apply the pixel classifier inside them, creating detections rather than annotations
//createAnnotationsFromPixelClassifier(classifierName, minArea, minHoleArea)
createDetectionsFromPixelClassifier(classifierName, minArea, minHoleArea)
print "Done!"
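
For what it's worth, detections tend to be lighter weight than annotations in QuPath, so createDetectionsFromPixelClassifier is often the easier option when the classifier produces many regions across a large slide.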