Guide: QuPath Multiplex analysis workflow (detailed) pre-0.2.0 release

In case anyone is still coming here from a search, this has been largely simplified and replaced in 0.2.0 with:
https://qupath.readthedocs.io/en/latest/docs/tutorials/thresholding.html
and:
https://qupath.readthedocs.io/en/latest/docs/tutorials/multiplex_analysis.html

Hi all, time for another guide!
Final script for this demo here, if you just want to try running it on the linked image.
Short and sweet version here in the third post if you don’t want to dig through the explanations and just want the scripts.

This will be a walkthrough of how to use a series of scripts to first segment a field of view image, then generate cells, classify them, and generate summary data that you can use for your results. Most of this can be expanded out to whole slide images; however, the current tissue detection script (to be replaced by the pixel classifier) will need to downsample the image, and you may run into fairly severe slowdowns if you use subcellular detections on very large numbers of cells (hundreds of thousands). Ask away if you have any questions.

Also note, you can combine the Vectra higher mag fields of view into a single whole slide image, if that would be useful.

Starting a project

Much as was described in this post, you will need some sort of a project. How you go about starting that may depend on which version of QuPath you are using, but this guide will assume that you have created a project folder and added your images to a sub-folder of the project for portability. The latter is not a required step, but I find it useful since I often want to drag a shared project on a network drive down to a local drive for increased responsiveness when running many manual operations to determine thresholds. You will need readable metadata in your images (pixel size, etc., as seen in the Image tab) for many of these scripts to work, so figure out any Bio-Formats issues early!

Sample project shown will use the following image from Pete’s blog:

“The original TIFF image used in this post is LuCa-7color_[13860,52919]_1x1component_data.tif from Perkin Elmer, part of Bio-Formats sample data available under CC-BY 4.0.”

Rough steps for 0.1.3 (pretty much the same for 0.1.2)

Create a folder for the project
image
Use that folder to create a project, which will have that folder name at the top of your image list.
image
Add images, although at this step I frequently like to move the image files to the newly created project folder. This allows me to drop the whole project onto a thumb drive, upload it to a shared drive or network folder, and have everything be self contained.
image
Once the images are positioned, import them.
image

Metadata for grouping images, blinding option.

If you have multiple patients, groups, or some other subset of your project, you can either create a project for each, or you can assign metadata so that scripts will behave differently per group. This is sometimes necessary due to variations in staining. While image names can be blinded through an option in Preferences (in 0.2.0m# versions), if you want to be blinded by group, you may want to have an associate create the metadata tags for you.
image

This becomes what you see below when sorted by metadata, roughly as shown in this YouTube video. For 0.1.3, I had to add each metadata value to each file individually; selecting multiple files at once for metadata was added later. If you need blinding, have someone not doing the analysis do this step, and keep track of what the metadata means. Also note that the sort-by-metadata option will have to be re-selected each time you open the project, if you want that kind of ordering.
image
At this point in M4, I would turn on blinding, then turn the program back over to whomever was performing the analysis.

One final note on the project creation, which was mentioned in the Adventure guide, is that I will name my qpproj file MyProject.qpproj, so that I can easily access it by right clicking on the QuPath icon on my taskbar (Windows). This function no longer works as of 0.2.0m#, but it was nice while it lasted!
image

Tissue segmentation/areas of interest/assigning classes

The first step in almost any project is creating an annotation to work within. If you are running the Tissue detection script, a rectangular outline of the entire image will be generated automatically.

createSelectAllObject(true);

The true, in this case, indicates that the object is considered selected, so any further steps, like cell detection, would run on it without a selectAnnotations() step.
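If you would rather make the selection explicit (or you already have an annotation), a minimal equivalent sketch, assuming nothing else is selected, would be:

createSelectAllObject(false);
selectAnnotations()
//whatever detection command comes next now runs on the selected annotation(s)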

In this case, I am interested in the tumor area and the stroma area separately.

There are a variety of ways to create annotation areas of interest. One is superpixels with a classifier, followed by converting the result to annotations; the one I will use here is a script that downsamples the image (if necessary), sends it to ImageJ where it is thresholded on a linear combination of all channels, and then returns the detected area as an annotation.

Script here.

The first version of the script provides a user interface that allows you to (fairly) easily play around with the variables to test out what sorts of values work for a given image. I recommend keeping notes of what works and what doesn’t for future reference, as things might get complicated between various images. In this case, though, it is quite simple since the exact same image was used for each group.
image
The area I would like to segment is the teal, channel 6 dense pockets. I will call that my Tumor region, with everything else being the stroma. There are several variants of this script, depending on the version you are using. For 0.1.2 and 0.1.3, you may need to swap between getAvailableChannels() and availableChannels(). If you run into any issues, just ask. This script will likely be obsolete once the pixel classifier is fully implemented. It is currently broken in M4, but there are fixes that allow it to work in M3.


I generally would recommend NOT checking Split unconnected annotations, as that causes the most slowdown and can eventually crash the program.

Downsample: Needed so that you can send the entire image to ImageJ, which is frequently not capable of handling the full resolution of the image.

Sigma: Gaussian filter applied in ImageJ, helps to smooth out the intensity distribution, which can help fill in holes in a region of the tissue (caused by nuclei, for example). Too large of a value can cause halos of empty space around your region of interest.

Lower and Upper threshold: Depending on the bit depth of your image, your background, and any incredibly bright artifacts, you will be adjusting these in order to pick up a band pass of the total intensities summed from the weights below. Unless you do have some very bright artifacts, most situations would not need an adjustment of the upper threshold.

Channel weights: By default they sum to 1, over however many channels you have. For this example, I am only interested in channel 6, so I would set the rest to 0. If you have two or three markers of interest (say, islets in the pancreas: glucagon, insulin, somatostatin, etc.), you could find a linear combination of each that allows you to threshold tissue areas that are a patchwork of channels (an example weights array is shown after these settings).

Run: By default, the script drops a box around the entire image, and Run runs the script within that area. However, you can adjust the size and shape of this default annotation manually before clicking Run, if you want.

Reset: Undo the previous attempt. If you “run” more than once, it will keep running within the newly created area. This option only stores one previous attempt.

Remove Small and Fill holes: These options use square micron based measurements to remove areas smaller than a certain threshold, or fill in holes smaller than a threshold (nuclei). They do not take effect when Run is clicked; you need to use them individually in the GUI script. Remove Small can be very slow, as it needs to “Split unconnected annotations” in order to perform its function.
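As a quick example of the weights format for the islet case mentioned above, a linear combination over three channels of a hypothetical 8-channel image might look like the line below (the script normalizes the weights to sum to 1, so only their relative values matter):

double [] weights = [0, 0.4, 0.3, 0.3, 0, 0, 0, 0]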
image
I looked for some fainter areas (green dot) that I still wanted to pick up as positive and entered a value a little below that in the Lower Threshold box (any sigma will reduce your expected threshold). I also drew a small object with the annotation tool to get a rough estimate of the size of object I would like to exclude. I added the area values in at the bottom.

These values resulted in the following once Remove Small and Fill Holes are run:
image
This looks good enough for the demo, and I already have a screenshot of the settings I want (these are not saved anywhere, copy them down!), so now I take a look at the Annotations tab and… whoa, that’s a lot of annotations! That’s right, I split them, so now I’ll want to merge them all back together so that they are treated as one large block. In the GUI I can use Objects->Select->Annotations and then Objects->Merge selected annotations to end up with one Tumor tissue annotation. Of course, it isn’t really a tumor annotation until I select it, right click on it, choose Set Class, and assign it the Tumor class!

Directly above the Merge selected annotations option is Make inverse annotation, which I can then use to fill in the rest of the tissue area. I will assign that one the class “Stroma.”

Turning on Fill annotations (Shift+F) in the View menu, I can see the result is what I expected.
image image
Ok, that’s great and all, but that was all very manual. I wanted a script to do this all for me for ALL of my images. Well, the first step is getting the non-GUI version of the tissue detection script here, and modifying all the values to match the ones used to generate the annotations manually.

There is one additional option I added into the noGUI version. I forget the exact project, but it was necessary to remove very large background objects, and I included a removeLargerThan variable, which can be set to 99999999999999999 or some very large number. Also, make sure that the weights matrix has the appropriate number of channels! That has tripped me up a couple of times. I end up with the following:

//v1.0
//This version strips out the user interface and most options, and replaces them with variables at the beginning of the script.
//I recommend using the UI version to figure out your settings, and this version to run as part of a workflow.
//Possibly replace whole image annotation with another type
createSelectAllObject(true);
def sigma = 1
def downsample = 1
def lowerThreshold = 3

//calculate bit depth for initially suggested upper threshold, replace the value with the Math.pow line or maxPixel variable
//int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
def upperThreshold = 65535

double [] weights = [0,0,0,0,0,1,0,0]
def smallestAnnotations = 300
def fillHolesSmallerThan = 300
def removeLargerThan = 9999999999999999999

*You may want to add a clearAllObjects() to the beginning of the non-GUI script in case you want to run it over and over. By default, it will simply overwrite the previous results without deleting anything, resulting in a huge mess.

What about the rest? Well, those were fairly simple steps, and we just need to add a few lines in at the end to replicate them. First, let’s select our annotations and merge them.

selectAnnotations()

mergeSelectedAnnotations()

Then, get that object, which will be at position 0 (because coding stuff) in the array of objects since it is the only thing there, and assign it the Tumor class.

tissue = getAnnotationObjects()

tissue[0].setPathClass(getPathClass("Tumor"))

Then, I want the rest of the area to be defined as stroma, so I’ll make an inverse annotation…

makeInverseAnnotation(tissue[0])

And then assign any annotations that are not already class Tumor to class Stroma.

stromaNaming = getAnnotationObjects()

stromaNaming.each{ if (it.getPathClass() != getPathClass("Tumor")){it.setPathClass(getPathClass("Stroma"))}}

To test things, I can delete all objects (Objects menu) and run the script. Looks good!

For complex projects, I tend to keep each step in a separate script until the end, so I would save this as “Step 1” or similar, and move on to cell detection. It also would let me delete all of these (potentially huge for a whole slide image) annotation areas and create small targeted ones for cell detection testing.

*Side note: while the script, by default, selects all annotations for the first step to detect Tumor areas, you could iterate and run a similar noGUI version on just the Stroma area in order to create an “Empty” annotation.
This point is also where I would recommend running the script, as slow as it might be, for the whole project. Maybe in the background, or on a separate computer, but skim through the results and make sure nothing too weird is going on! Meanwhile…

Cell segmentation (more details on these settings here)

*In the same way that you could run the tissue detection within a particular region, you could also run different cell detections within different annotations. Be VERY clear about the fact that you are doing this, and why, if you decide to go this way, as it could lead to very biased results. On the upside, it could let you pick up large, diffuse cancerous nuclei within the tumor regions that would otherwise be split into multiple cells. But it could also mean missing those same types of cells outside the tumor annotations, even though they were picked up within the tumor.

My nuclear signal for this image is in channel 7, so I’ll start with picking that as my nuclear detection channel. I mouse around while watching the lower right-hand corner (much easier in 0.2.0m4 with the increased text size setting!!!). The background is generally around 1 in channel 7, so I went with 2 as a threshold on a small area, and reduced the cell expansion due to the tightly packed nature of most of the nuclei.
image
Not perfect, but not bad given the ring-like nature of many of the nuclei. Next, I need relevant measurements to help classify the cells so that biological questions can be answered… which needs a biologist! Never attempt classification without someone who understands the biological questions and systems, as there are just so many things that can go wrong. Also, never trust analysis from anyone where you cannot see all the results, as small misunderstandings can lead to biased results throughout a large study. One antibody’s background is another antibody’s threshold. Iterating enough to get accurate classifications can be one of the most time consuming parts of this process.

Nuclear measurements are usually fairly straightforward, and the nuclear mean channel signal can generally be used. Cytoplasmic staining is far more problematic, as described here. While frequently very slow, I prefer to use subcellular detections to improve accuracy there, as the blind expansion of cytoplasm off of the nucleus can often pick up bits of nearby cells, or dilute the signal such that very small cells can look negative in low density areas. Imagine a T or B cell with a 5 micron cytoplasmic expansion… most of that would be empty space or even other cells.

Still, we have a cell detection that works, so let’s generate at least a few classes to see how it works.

For this version of QuPath, I add two lines to the end of the previous script.

selectAnnotations()
runPlugin('qupath.imagej.detect.nuclei.WatershedCellDetection', '{"detectionImageFluorescence": 7,  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 0.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 2.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 3.0,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');

image

Channel 3 seemed like a good one to start with, being a fairly solid nuclear marker, whatever it is. *Looking at this in 0.2.0m4, I can see that channel 3 was FoxP3.
image
Using Measure->Show Measurement Maps, I can zip around while toggling detections on and off using the H or D key (depending on version) and see where my threshold should be. Dragging the ball on the lower track to the left will change the number in the lower right (the max value), and anything above this value will turn “red” in the measurement maps view, showing what objects are above that particular threshold.


Having no idea what any of this means biologically, I settled on a value of 2.3 for the Nucleus: Channel 3 mean as my threshold for positivity. Mostly this was handled by looking for consistently strong staining, and dragging the lower bar back and forth and seeing when various cells “turned red.” That would indicate that they were near or above the “Max” value, which is 1.32 in the above images.
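If a single-measurement cutoff is all you need, the same threshold can also be applied by script rather than through the multiplex classifier; a minimal sketch using the 2.3 value from above (the measurement name must match exactly what appears in your measurement lists):

setCellIntensityClassifications("Nucleus: Channel 3 mean", 2.3)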

*Ideally I would prefer a color map that had more of a range indicator, and we have the ability to make these now in M4! Below is an example of a mostly Viridis-like color map, but with a single, hard cutoff at either the top or bottom of the map. Similar to how a Range Indicator works in some microscopy software, anything below the Min threshold turns black, while anything above the Max threshold turns orange, with everything in the middle being shades of blue, green or yellow. Supposedly this should be fairly color-blind friendly as well.
image
With channel 6 being a pretty solid stain for cytokeratin, I figured I would use cytoplasmic mean for 6 and I decided to take a shot at channel 2 (edit: turns out to be CD8), a less frequent cytoplasmic stain. Since it is cytoplasmic, and nuclear segmentation was frequently problematic in this sample, I went with a double threshold by using Subcellular Detection. This choice allowed me to both set an intensity threshold (eliminate background), and an area threshold (eliminate overlap with positive cells), as measuring cytoplasmic intensities can be problematic with QuPath’s blind cytoplasmic expansion from the nucleus. More information on that here if you are interested.


By setting my expected spot size to 1 square micron, I can use the “Estimated spot count” as an area measurement (3.5 estimated spots would be 3.5 square microns of positive area detected). My threshold of 1.0 was somewhat arbitrary, but would be best set with a biologist familiar with the staining, cells, or tissue present! I always use Split by shape for cytoplasmic detections, while I use Split by intensity for spot counts (ISH). Smooth before detection: give it a try and see if you like the results. To taste.

Note that the subcellular detections command will run on whatever is selected. So, whether that is a single cell, or an annotation, be careful what you have selected when you run it! In version 0.1.3, if you click into empty space to try to unselect, you will likely select an annotation near the edge instead, and may find you have run the command on just that annotation. Also, it can be quite slow, especially on systems built for high throughput as it will bottleneck on a single thread part of the process.

Once it is done, you have a new value to look at in Show Measurement Maps, Subcellular: Channel 2: Num spots estimated. I ended up picking a value of 30 square microns as a decent threshold.
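To sanity check a candidate threshold by script, a short sketch like the one below counts the cells above it (the measurement name should match what you see in Show Measurement Maps for your image):

def spotPositive = getCellObjects().findAll {
    measurement(it, "Subcellular: Channel 2: Num spots estimated") >= 30
}
println("Cells above the channel 2 area threshold: " + spotPositive.size())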

Repeat for as many channels as you want!

For this walkthrough, I stopped with the above mentioned markers, having one nuclear, one cytoplasmic, and my tumor marker, for a total of 3 base types. I find it easiest to classify all of the cells at once with the multiplex classifier described here, though there are other options listed as well!
image
image

I both saved and ran the classifier, as shown. The saved classifier will come in handy, as I would like to run this across the whole project!

From the annotation measurements I could see that there were three Orange,Yellow positive class cells, meaning they had both strong Channel 2 and Channel 3 positivity. From the detections list, I could order by class and then find the exact locations of these cells.
image
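If you would rather not scroll through the detections list, a small sketch like this prints the centroid coordinates (in pixels) of a given class so the cells can be found again later; the class name here is just a placeholder for whatever your classifier produced:

def target = getPathClass("Ch2,Ch3 positive")
getCellObjects().findAll {it.getPathClass() == target}.each {
    def roi = it.getROI()
    println("x = " + roi.getCentroidX() + ", y = " + roi.getCentroidY())
}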

Now that I have my cells classified, though, those pesky subcellular detections are just getting in the way, and could be problematic in terms of detection counts. To eliminate them (once I have checked to my, and the biologist’s, satisfaction that they are accurate!), I run another subcellular detection with all channel fields set to -1.
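In script form, that cleanup is simply the same Subcellular detection command with every channel set to -1; these are the two lines used as Step 4 in the combined script later in this post:

selectAnnotations()
runPlugin('qupath.imagej.detect.cells.SubcellularDetection', '{"detection[Channel 1]": -1.0,  "detection[Channel 2]": -1.0,  "detection[Channel 3]": -1.0,  "detection[Channel 4]": -1.0,  "detection[Channel 5]": -1.0,  "detection[Channel 6]": -1.0,  "detection[Channel 7]": -1.0,  "detection[Channel 8]": -1.0,  "doSmoothing": false,  "splitByIntensity": false,  "splitByShape": false,  "spotSizeMicrons": 1.0,  "minSpotSizeMicrons": 0.5,  "maxSpotSizeMicrons": 2.0,  "includeClusters": true}');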

A couple of other tools that will make this process a bit easier!

First, if you have multiple images, you may want to have the same display settings between images, so that you can better estimate the shape of staining and background. For many versions of QuPath, it will autoestimate the display settings whenever you open an image, though more recent versions allow you to save the display settings. To be sure you are using the same display settings in each image, you can use a script to manually set the min and max threshold for each channel.

This script will let you recolor all of the channels (not contrast, as above, but change red to blue). Usually not necessary, but sometimes during an import or export process, you may want to adjust the exact color for each channel. It also gives you some more options for generating publication images, if you want to brighten up a particular channel (I frequently like showing the nuclear channel as “white”). You can very easily combine the first two scripts into a single script to recolor and set thresholds consistently for your images. Anything that requires the viewer, though, will not work when run across a project!

One final script, also referenced in the multiplex classifier post, will allow you to recolor or rename particular classes. Certain lines from this script could be extracted to convert all classification names within a project. “Ch2,Ch5 positive,” for example, might be less informative to someone reviewing the accuracy of the classifications than “Tissue resident memory T cell.”
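For a one-off rename by script, a sketch along these lines reassigns every cell of one class to a new one (both class names are just the examples from the paragraph above; substitute your own):

def oldClass = getPathClass("Ch2,Ch5 positive")
def newClass = getPathClass("Tissue resident memory T cell")
getCellObjects().findAll {it.getPathClass() == oldClass}.each {it.setPathClass(newClass)}
fireHierarchyUpdate()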

**Warning (now also handled in the multiplex classifier): the classifier doesn’t know by default what the correct order “should” be for labels, so you might end up with Ch2,Ch5 positive in one image and Ch5,Ch2 positive in another, depending on the order in which the cells were processed. If it becomes enough of an issue I might dig around in the code to find a workaround, but for now, just be aware that a script to edit the class names might come in handy.

Anyway, great, but we used a GUI to do this classification. We can’t really do that over many images efficiently, even if we can load the classifier to edit it later. Like with the tissue detection, there is a non-GUI version of the script where all you need to do is type in the name of the classifier that was saved, and tack that script on to the end of your ever growing workflow script!

Next I can run a script to generate some summary measurements (cells per mm^2, percentage counts) per annotation area, and another to export the results to a text file. One final script can be run to generate a summary text file by combining the previous single text files per image. I have an adjusted version here that doesn’t generate a popup window, but assumes all of the single image text files are in the “annotation results” folder. One potential error you might run into with this part of the script is that if you are viewing a previous version of the combined results file, the attempt to write that file will fail due to it currently being open.

All of these scripts can be combined to form one script that can be used on the whole project. In the script editor (Automate->Show script editor), you can Run->Run for project. If the metadata groups are different enough, the metadata can be used to set variables within the script. For example, an if-then statement could check for metadata Group == 3 and use a lower DAPI threshold due to (well, not in this case, since all the images are the same!) weak staining there. A minimal sketch of that follows below.
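As a rough sketch of what that could look like (the metadata key "Group" and the getMetadataValue() call are assumptions based on 0.2.0-style project entries; older versions expose metadata slightly differently):

def group = getProjectEntry().getMetadataValue("Group")
//hypothetical example: use a lower DAPI threshold for group 3
def dapiThreshold = (group == "3") ? 1.5 : 2.0
println("Using DAPI threshold " + dapiThreshold + " for group " + group)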
image
Looking at this, I probably should not have named both the annotation and the cell class as “Tumor,” but that is something that can also be changed fairly quickly! Plan things out, and be better than I was!

I suspect there will be maaany problems getting this all to work for different versions of QuPath, but ask away.


The final script, with some notes and steps. These scripts were all just mashed on top of one another, with intro statements and comments intact.

//Step 1, Generate the tissue level annotations, and remove any previous objects in case the script is being run a second time.
clearAllObjects()

//v1.0
//This version strips out the user interface and most options, and replaces them with variables at the beginning of the script.
//I recommend using the UI version to figure out your settings, and this version to run as part of a workflow.
//Possibly replace whole image annotation with another type
createSelectAllObject(true);
def sigma = 1
def downsample = 1
def lowerThreshold = 3

//calculate bit depth for initially suggested upper threshold, replace the value with the Math.pow line or maxPixel variable
//int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
def upperThreshold = 65535

double [] weights = [0,0,0,0,0,1,0,0]
def smallestAnnotations = 300
def fillHolesSmallerThan = 300
def removeLargerThan = 9999999999999999999

import qupath.lib.gui.QuPathGUI
import qupath.imagej.plugins.ImageJMacroRunner
import qupath.lib.plugins.parameters.ParameterList
import qupath.lib.roi.*
import qupath.lib.objects.*


def imageData = getCurrentImageData()
def server = imageData.getServer()
def pixelSize = server.getPixelHeightMicrons()

   //Place all of the final weights into an array that can be read into ImageJ
    //Normalize weights so that sum =1
    def sum = weights.sum()
    if (sum<=0){
        print "Please use positive weights"
        return;
    }
    for (i=0; i<weights.size(); i++){
        weights[i] = weights[i]/sum
    }
    
    //[1,2,3,4] format can't be read into ImageJ arrays (or at least I didn't see an easy way), it needs to be converted to 1,2,3,4
    def weightList =weights.join(", ")
    //Get rid of everything already in the image.  Not totally necessary, but useful when I am spamming various values.
    def annotations = getAnnotationObjects()

    def params = new ImageJMacroRunner(getQuPath()).getParameterList()

    // Change the value of a parameter, using the JSON to identify the key
    params.getParameters().get('downsampleFactor').setValue(downsample)
    params.getParameters().get('sendROI').setValue(false)
    params.getParameters().get('sendOverlay').setValue(false)
    params.getParameters().get('getOverlay').setValue(false)
    if (!getQuPath().getClass().getPackage()?.getImplementationVersion()){
        params.getParameters().get('getOverlayAs').setValue('Annotations')
    }
    params.getParameters().get('getROI').setValue(true)
    params.getParameters().get('clearObjects').setValue(false)

    // Get the macro text and other required variables
    def macro ='original = getImageID();run("Duplicate...", "title=X3t4Y6lEt duplicate");'+
    'weights=newArray('+weightList+');run("Stack to Images");name=getTitle();'+
    'baseName = substring(name, 0, lengthOf(name)-1);'+
    'for (i=0; i<'+weights.size()+';'+
    'i++){currentImage = baseName+(i+1);selectWindow(currentImage);'+
    'run("Multiply...", "value="+weights[i]);}'+
    'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
    'run("Z Project...", "projection=[Sum Slices]");'+
    'run("Gaussian Blur...", "sigma='+sigma+'");'+
    'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
    'run("Create Selection");run("Colors...", "foreground=white background=black selection=white");'+
    'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
    'selectImage(original);run("Restore Selection");'

    def macroRGB = 'weights=newArray('+weightList+');'+
    'original = getImageID();run("Duplicate...", " ");'+
    'run("Make Composite");run("Stack to Images");'+
    'selectWindow("Red");rename("Red X3t4Y6lEt");run("Multiply...", "value="+weights[0]);'+
    'selectWindow("Green");rename("Green X3t4Y6lEt");run("Multiply...", "value="+weights[1]);'+
    'selectWindow("Blue");rename("Blue X3t4Y6lEt");run("Multiply...", "value="+weights[2]);'+
    'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
    'run("Z Project...", "projection=[Sum Slices]");'+
    'run("Gaussian Blur...", "sigma='+sigma+'");'+
    'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
    'run("Create Selection");run("Colors...", "foreground=white background=black selection=cyan");'+
    'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
    'selectImage(original);run("Restore Selection");'


    for (annotation in annotations) {
        //Check if we need to use the RGB version
        if (imageData.getServer().isRGB()) {
            ImageJMacroRunner.runMacro(params, imageData, null, annotation, macroRGB)
        } else{ ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)}
    }

    //remove whole image annotation and lock the new annotation
    removeObjects(annotations,true)
//Option to remove small sized annotation areas. Requires pixel size


//Clip button goes with the Remove Small button on the dialog, to remove objects below the text box amount in um^2
        def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}

        for (section in areaAnnotations){
            
            def polygons = PathROIToolsAwt.splitAreaToPolygons(section.getROI())
            def newPolygons = polygons[1].collect {
                updated = it
                for (hole in polygons[0])
                    updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
                return updated
            }
                    // Remove original annotation, add new ones
        annotations = newPolygons.collect {new PathAnnotationObject(it)}
        
        removeObject(section, true)
        addObjects(annotations)

            
        }




    //PART2


    double pixelWidth = server.getPixelWidthMicrons()
    double pixelHeight = server.getPixelHeightMicrons()
    def smallAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelWidth, pixelHeight) < smallestAnnotations}
    println("small "+smallAnnotations)
    removeObjects(smallAnnotations, true)
    fireHierarchyUpdate()

    // Get selected objects
    // If you're willing to loop over all annotation objects, for example, then use getAnnotationObjects() instead
    def pathObjects = getAnnotationObjects()

    // Create a list of objects to remove, add their replacements
    def toRemove = []
    def toAdd = []
    for (pathObject in pathObjects) {
        def roi = pathObject.getROI()
        // AreaROIs are the only kind that might have holes
        if (roi instanceof AreaROI ) {
            // Extract exterior polygons
            def polygons = PathROIToolsAwt.splitAreaToPolygons(roi)[1] as List
            // If we have multiple polygons, merge them
            def roiNew = polygons.remove(0)
            def roiNegative = PathROIToolsAwt.splitAreaToPolygons(roi)[0] as List
            for (temp in polygons){
                roiNew = PathROIToolsAwt.combineROIs(temp, roiNew, PathROIToolsAwt.CombineOp.ADD)
            }
            for (temp in roiNegative){  
                if (temp.getArea() > fillHolesSmallerThan/pixelSize/pixelSize){
                    roiNew = PathROIToolsAwt.combineROIs(roiNew, temp, PathROIToolsAwt.CombineOp.SUBTRACT)
                }
            }
            // Create a new annotation
            toAdd << new PathAnnotationObject(roiNew, pathObject.getPathClass())
            toRemove << pathObject
        }
    }

// Remove & add objects as required
def hierarchy = getCurrentHierarchy()
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObjects(toRemove, true)
hierarchy.addPathObjects(toAdd, false)

def largeAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelSize, pixelSize) > removeLargerThan}
removeObjects(largeAnnotations, true)

getAnnotationObjects().each{it.setLocked(true)}


//Merge final results into a single line in the annotations table (comment these out if you want to keep separate annotations)
selectAnnotations()
mergeSelectedAnnotations()
tissue = getAnnotationObjects()
tissue[0].setPathClass(getPathClass("Tumor"))
makeInverseAnnotation(tissue[0])
stromaNaming = getAnnotationObjects()
stromaNaming.each{ if (it.getPathClass() != getPathClass("Tumor")){it.setPathClass(getPathClass("Stroma"))}}

println("Annotation areas completed")

//Step 2, the SLOW part, generate cells and subcellular detections

selectAnnotations()
runPlugin('qupath.imagej.detect.nuclei.WatershedCellDetection', '{"detectionImageFluorescence": 7,  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 0.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 2.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 3.0,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');
runPlugin('qupath.imagej.detect.cells.SubcellularDetection', '{"detection[Channel 1]": -1.0,  "detection[Channel 2]": 1.0,  "detection[Channel 3]": -1.0,  "detection[Channel 4]": -1.0,  "detection[Channel 5]": -1.0,  "detection[Channel 6]": -1.0,  "detection[Channel 7]": -1.0,  "detection[Channel 8]": -1.0,  "doSmoothing": false,  "splitByIntensity": false,  "splitByShape": false,  "spotSizeMicrons": 1.0,  "minSpotSizeMicrons": 0.5,  "maxSpotSizeMicrons": 2.0,  "includeClusters": true}');

// Step 3, Run the classifier that should be saved in your project's classifiers folder. The name should be set to whatever you named the classifier

//V3 Corrected classification over-write error on classifiers with more than 3 parts

import qupath.lib.gui.helpers.ColorToolsFX;
import javafx.scene.paint.Color;

//Hopefully you can simply replace the fileName with your classifier, and include this in a script.
fileName = "3Color"
positive = []
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",fileName)
    new File(path).withObjectInputStream {
        cObj = it.readObject()
    }
        //Create an arraylist with the same number of entries as classes
    CHANNELS = cObj.size()
    //println(cObj)

        //set up for classifier
        def cells = getCellObjects()
        
        cells.each {it.setPathClass(getPathClass('Negative'))}

        //start classifier with all cells negative

        for (def i=0; i<CHANNELS; i++){
            def lower = Float.parseFloat(cObj[i][1])
            def upper = Float.parseFloat(cObj[i][3])
            //create lists for each measurement, classify cells based off of those measurements
            positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper}
            positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
            c = Color.web(cObj[i][4])
            currentPathClass = getPathClass(cObj[i][2]+' positive')
            //for some reason setColor needs to be used here instead of setColorRGB which applies to objects and not classes?
            currentPathClass.setColor(ColorToolsFX.getRGBA(c))
        }
        for (def i=0; i<(CHANNELS-1); i++){
            //println(i)
            int remaining = 0
            for (def j = i+1; j<CHANNELS; j++){
                remaining +=1
            }
            depth = 2
            classifier(cObj[i][2], positive[i], remaining, i)

        }

        fireHierarchyUpdate()
def classifier (listAName, listA, remainingListSize, position){
    //current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class
    for (def y=0; y <remainingListSize; y++){
        k = (position+y+1).intValue()
    // get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
        def lower = Float.parseFloat(cObj[k][1])
        def upper = Float.parseFloat(cObj[k][3])
    //intersect the listA with the first of the listOfLists
    //on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of 
    //Class 1 that meet both criteria
        def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper}
        newName = cObj[k][2]

    //Create a new name based off of the current name and the newly compared class
    // on the first runthrough this would give "Class 1,Class 2 positive"
        def mergeName = listAName+","+newName
        passList.each{
            if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
                it.setPathClass(getPathClass(mergeName+' positive')); 
                it.getMeasurementList().putMeasurement("ClassDepth", depth)
            }
        }
         if (k == (positive.size()-1)){ 
        
            //println(passList.size()+"number of "+mergeName+" cells passed")
            for (def z=0; z<CHANNELS; z++){
                //println("before"+positive[z].size())
                
                positive[z] = positive[z].minus(passList)
                
                //println(z+" after "+positive[z].size())
            }
            depth -=1
            return;
        } else{ 
            def passAlong = remainingListSize-1
            //println("passAlong "+passAlong.size())
            //println("name for next " +mergeName)
            depth +=1
            classifier(mergeName, passList, passAlong, k)
        }
    }
}
//Step 4, with the cells classified, I can strip out the subcellular detections so they don't interfere with the Measurements script that comes next
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.SubcellularDetection', '{"detection[Channel 1]": -1.0,  "detection[Channel 2]": -1.0,  "detection[Channel 3]": -1.0,  "detection[Channel 4]": -1.0,  "detection[Channel 5]": -1.0,  "detection[Channel 6]": -1.0,  "detection[Channel 7]": -1.0,  "detection[Channel 8]": -1.0,  "doSmoothing": false,  "splitByIntensity": false,  "splitByShape": false,  "spotSizeMicrons": 1.0,  "minSpotSizeMicrons": 0.5,  "maxSpotSizeMicrons": 2.0,  "includeClusters": true}');

//Step 5, and generate summary measurements with yet another pasted in script!

//Checks for all detections within a given annotation, DOES NOT EXCLUDE DETECTIONS WITHIN SUB-ANNOTATIONS.
//That last bit should make it compatible with trained classifiers.

import qupath.lib.objects.PathCellObject
imageData = getCurrentImageData()
server = imageData.getServer()
pixelSize = server.getPixelHeightMicrons()
Set classList = []
for (object in getAllObjects().findAll{it.isDetection() /*|| it.isAnnotation()*/}) {
    classList << object.getPathClass()
}
println(classList)
hierarchy = getCurrentHierarchy()

for (annotation in getAnnotationObjects()){
    totalCells = hierarchy.getDescendantObjects(annotation,null, PathCellObject)

    for (aClass in classList){
        if (aClass){
            if (totalCells.size() > 0){
                cells = hierarchy.getDescendantObjects(annotation,null, PathCellObject).findAll{it.getPathClass() == aClass}
                //annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", cells.size())
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", cells.size()*100/totalCells.size())

                annotationArea = annotation.getROI().getArea()
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", cells.size()/(annotationArea*pixelSize*pixelSize/1000000))
            } else {
                //annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells", 0)
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" %", 0)
                annotation.getMeasurementList().putMeasurement(aClass.getName()+" cells/mm^2", 0)
            
            }
        }
    }

}
println("done")

//Step 6, generate summary text files per image

/*
 * QuPath v0.1.2 has some bugs that make exporting annotations a bit annoying, specifically it doesn't include the 'dot' 
 * needed in the filename if you run it in batch, and it might put the 'slashes' the wrong way on Windows.
 * Manually fixing these afterwards is not very fun.
 * 
 * Anyhow, until this is fixed you could try the following script with Run -> Run for Project.
 * It should create a new subdirectory in the project, and write text files containing results there.
 *
 * @author Pete Bankhead
 */

def name = getProjectEntry().getImageName() + '.txt'
def path = buildFilePath(PROJECT_BASE_DIR, 'annotation results')
mkdirs(path)
path = buildFilePath(path, name)
saveAnnotationMeasurements(path)
print 'Results exported to ' + path

//Step 7, Generate a single summary text file that can be opened in Excel that contains annotation level data for the whole project.
//This technically only needs to be run once, but you can also just tack it on the end like this, and it will run after each image and over-write itself.

/**
 * Script to combine results tables exported by QuPath.
 *
 * This is particularly intended to deal with the fact that results tables of annotations can produce results
 * with different column names, numbers and orders - making them awkward to combine later manually.
 *
 * It prompts for a directory containing exported text files, and then writes a new file in the same directory.
 * The name of the new file can be modified - see the first lines below.
 *
 * Note: This hasn't been tested very extensively - please check the results carefully, and report any problems so they
 * can be fixed!
 *
 * @author Pete Bankhead
 */

import qupath.lib.gui.QuPathGUI

// Some parameters you might want to change...
String ext = '.txt' // File extension to search for
String delimiter = '\t' // Use tab-delimiter (this is for the *input*, not the output)
String outputName = 'Combined_results.txt' // Name to use for output; use .csv if you really want comma separators

// Prompt for directory containing the results
//def dirResults = QuPathGUI.getSharedDialogHelper().promptForDirectory()
def dirResults = new File(buildFilePath(PROJECT_BASE_DIR, 'annotation results'))
if (dirResults == null)
    return
def fileResults = new File(dirResults, outputName)

// Get a list of all the files to merge
def files = dirResults.listFiles({
    File f -> f.isFile() &&
            f.getName().toLowerCase().endsWith(ext) &&
            f.getName() != outputName} as FileFilter)
if (files.size() <= 1) {
    print 'At least two results files needed to merge!'
    return
} else
    print 'Will try to merge ' + files.size() + ' files'

// Represent final results as a 'list of maps'
def results = new ArrayList<Map<String, String>>()

// Store all column names that we see - not all files necessarily have all columns
def allColumns = new LinkedHashSet<String>()
allColumns.add('File name')

// Loop through the files
for (file in files) {
    // Check if we have anything to read
    def lines = file.readLines()
    if (lines.size() <= 1) {
        print 'No results found in ' + file
        continue
    }
    // Get the header columns
    def iter = lines.iterator()
    def columns = iter.next().split(delimiter)
    allColumns.addAll(columns)
    // Create the entries
    while (iter.hasNext()) {
        def line = iter.next()
        if (line.isEmpty())
            continue
        def map = ['File name': file.getName()]
        def values = line.split(delimiter)
        // Check if we have the expected number of columns
        if (values.size() != columns.size()) {
            print String.format('Number of entries (%d) does not match the number of columns (%d)!', columns.size(), values.size())
            print('I will stop processing ' + file.getName())
            break
        }
        // Store the results
        for (int i = 0; i < columns.size(); i++)
            map[columns[i]] = values[i]
        results.add(map)
    }
}

// Create a new results file - using a comma delimiter if the extension is csv
if (outputName.toLowerCase().endsWith('.csv'))
    delimiter = ','
int count = 0
fileResults.withPrintWriter {
    def header = String.join(delimiter, allColumns)
    it.println(header)
    // Add each of the results, with blank columns for missing values
    for (result in results) {
        for (column in allColumns) {
            it.print(result.getOrDefault(column, ''))
            it.print(delimiter)
        }
        it.println()
        count++
    }
}

// Success!  Hopefully...
print 'Done! ' + count + ' result(s) written to ' + fileResults.getAbsolutePath()

TLDR version, aka, “I just want to play with the code blocks like LEGOs”

  1. Script for GUI tissue classification
    1.1 Script for non-GUI tissue classification, for your combined script.
    1.2 This could also be substituted with any other annotation creation step, like a simple createSelectAllObject(true); or a Simple tissue detection command in brightfield.

  2. Make sure your annotations are selected, and generate cells, subcells, etc. These lines you can get from the Workflow tab by creating a script of all of the conditions you have tested. Copy and paste what you need.

  3. GUI Classifier script here, and non-GUI classifier for the combined script here. Can also replace this with a trained classifier using the runClassifier("path to file.qpclassifier") command (see the sketch after this list). Or for simple projects, use setCellIntensityClassifications("Measurement name here", thresholdValueHere).

    3.1. Remove any extra subcellular detections if desired, and if you do want to rename any classes (as a result of the awkward names from the multiplex classifier), that should be done now, before measurement summaries are generated.

  4. Run the script to generate percent positive and cell density measurements for each annotation.

  5. Create individual text files per image, with one line per annotation.

  6. Create a summary text file called combined_results.txt. This step can either be run separately, once, or tacked onto the end of the combined script so that it keeps overwriting itself. Original version of the script here.
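For item 3, the trained classifier route can be as short as the line below (a sketch; the file name is hypothetical and should point to your own saved .qpclassifier in the project's classifiers folder):

runClassifier(buildFilePath(PROJECT_BASE_DIR, "classifiers", "MyClassifier.qpclassifier"))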

I love this workflow; however, I made some changes to the tissue classification. Rather than having the rest of the image be defined as the stroma (which overestimates the actual area), I’ve tweaked the script to create two non-overlapping annotations for tumor and stroma (with no blank space). After detecting the tumor tissue and creating an inverse annotation, the tumor annotation is saved as an object and then deleted. I then performed another round of tissue classification using the inverse annotation, but this time with a low threshold for DAPI. Once the inverse annotation is ‘trimmed down’ this way to only include area containing nuclei, I added the tumor annotation back into the image.

I’ve included the modified script below. There are probably a few lines throughout that are redundant, since I mostly just stacked two slightly different versions of the non-GUI tissue classifier on top of each other and made sure not to declare the same variables twice.

My PanCK was on Channel 4 and DAPI on Channel 5.

//v1.2 WATCH FOR COMPLETION MESSAGE IN LOG, TAKES A LONG TIME IN LARGE IMAGES

//This version strips out the user interface and most options, and replaces them with variables at the beginning of the script.
//I recommend using the GUI version to figure out your settings, and this version to run as part of a workflow.
//Possibly replace whole image annotation with another type
createSelectAllObject(true);
def sigma = 4
def downsample = 1
def lowerThreshold = 5.0

//calculate bit depth for initially suggested upper threshold, replace the value with the Math.pow line or maxPixel variable
//int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
def upperThreshold = 2147483647

double [] weights = [0,0,0,1,0,0]
//Remove smaller than
def smallestAnnotations = 300
def fillHolesSmallerThan = 1000
//For detection of small objects, not included in GUI version.
def removeLargerThan = 99999999999999

import qupath.lib.gui.QuPathGUI
import qupath.imagej.plugins.ImageJMacroRunner
import qupath.lib.plugins.parameters.ParameterList
import qupath.lib.roi.*
import qupath.lib.objects.*


def imageData = getCurrentImageData()
def server = imageData.getServer()
def pixelSize = server.getPixelHeightMicrons()

//Place all of the final weights into an array that can be read into ImageJ
//Normalize weights so that sum =1
def sum = weights.sum()
if (sum<=0){
    print "Please use positive weights"
    return;
}
for (i=0; i<weights.size(); i++){
    weights[i] = weights[i]/sum
}

//[1,2,3,4] format can't be read into ImageJ arrays (or at least I didn't see an easy way), it needs to be converted to 1,2,3,4
def weightList =weights.join(", ")
//Get rid of everything already in the image.  Not totally necessary, but useful when I am spamming various values.
def annotations = getAnnotationObjects()

def params = new ImageJMacroRunner(getQuPath()).getParameterList()

// Change the value of a parameter, using the JSON to identify the key
params.getParameters().get('downsampleFactor').setValue(downsample)
params.getParameters().get('sendROI').setValue(false)
params.getParameters().get('sendOverlay').setValue(false)
params.getParameters().get('getOverlay').setValue(false)
if (!getQuPath().getClass().getPackage()?.getImplementationVersion()){
    params.getParameters().get('getOverlayAs').setValue('Annotations')
}
params.getParameters().get('getROI').setValue(true)
params.getParameters().get('clearObjects').setValue(false)

// Get the macro text and other required variables
def macro ='original = getImageID();run("Duplicate...", "title=X3t4Y6lEt duplicate");'+
        'weights=newArray('+weightList+');run("Stack to Images");name=getTitle();'+
        'baseName = substring(name, 0, lengthOf(name)-1);'+
        'for (i=0; i<'+weights.size()+';'+
        'i++){currentImage = baseName+(i+1);selectWindow(currentImage);'+
        'run("Multiply...", "value="+weights[i]);}'+
        'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
        'run("Z Project...", "projection=[Sum Slices]");'+
        'run("Gaussian Blur...", "sigma='+sigma+'");'+
        'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
        'run("Create Selection");run("Colors...", "foreground=white background=black selection=white");'+
        'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
        'selectImage(original);run("Restore Selection");'

def macroRGB = 'weights=newArray('+weightList+');'+
        'original = getImageID();run("Duplicate...", " ");'+
        'run("Make Composite");run("Stack to Images");'+
        'selectWindow("Red");rename("Red X3t4Y6lEt");run("Multiply...", "value="+weights[0]);'+
        'selectWindow("Green");rename("Green X3t4Y6lEt");run("Multiply...", "value="+weights[1]);'+
        'selectWindow("Blue");rename("Blue X3t4Y6lEt");run("Multiply...", "value="+weights[2]);'+
        'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
        'run("Z Project...", "projection=[Sum Slices]");'+
        'run("Gaussian Blur...", "sigma='+sigma+'");'+
        'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
        'run("Create Selection");run("Colors...", "foreground=white background=black selection=cyan");'+
        'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
        'selectImage(original);run("Restore Selection");'


for (annotation in annotations) {
    //Check if we need to use the RGB version
    if (imageData.getServer().isRGB()) {
        ImageJMacroRunner.runMacro(params, imageData, null, annotation, macroRGB)
    } else{ ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)}
}

//remove whole image annotation and lock the new annotation
removeObjects(annotations,true)
//Option to remove small sized annotation areas. Requires pixel size


//Clip button goes with the Remove Small button on the dialog, to remove objects below the text box amount in um^2
def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}

for (section in areaAnnotations){

    def polygons = PathROIToolsAwt.splitAreaToPolygons(section.getROI())
    def newPolygons = polygons[1].collect {
        updated = it
        for (hole in polygons[0])
            updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
        return updated
    }
    // Remove original annotation, add new ones
    annotations = newPolygons.collect {new PathAnnotationObject(it)}

    removeObject(section, true)
    addObjects(annotations)


}




//PART2


double pixelWidth = server.getPixelWidthMicrons()
double pixelHeight = server.getPixelHeightMicrons()
def smallAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelWidth, pixelHeight) < smallestAnnotations}
println("small "+smallAnnotations)
removeObjects(smallAnnotations, true)
fireHierarchyUpdate()

// Get selected objects
// If you're willing to loop over all annotation objects, for example, then use getAnnotationObjects() instead
def pathObjects = getAnnotationObjects()

// Create a list of objects to remove, add their replacements
def toRemove = []
def toAdd = []
for (pathObject in pathObjects) {
    def roi = pathObject.getROI()
    // AreaROIs are the only kind that might have holes
    if (roi instanceof AreaROI ) {
        // Extract exterior polygons
        def polygons = PathROIToolsAwt.splitAreaToPolygons(roi)[1] as List
        // If we have multiple polygons, merge them
        def roiNew = polygons.remove(0)
        def roiNegative = PathROIToolsAwt.splitAreaToPolygons(roi)[0] as List
        for (temp in polygons){
            roiNew = PathROIToolsAwt.combineROIs(temp, roiNew, PathROIToolsAwt.CombineOp.ADD)
        }
        for (temp in roiNegative){
            if (temp.getArea() > fillHolesSmallerThan/pixelSize/pixelSize){
                roiNew = PathROIToolsAwt.combineROIs(roiNew, temp, PathROIToolsAwt.CombineOp.SUBTRACT)
            }
        }
        // Create a new annotation
        toAdd << new PathAnnotationObject(roiNew, pathObject.getPathClass())
        toRemove << pathObject
    }
}

// Remove & add objects as required
def hierarchy = getCurrentHierarchy()
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObjects(toRemove, true)
hierarchy.addPathObjects(toAdd, false)

def largeAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelSize, pixelSize) > removeLargerThan}
removeObjects(largeAnnotations, true)

getAnnotationObjects().each{it.setLocked(true)}


//Merge final results into a single line in the annotations table (comment these out if you want to keep separate annotations)
selectAnnotations()
mergeSelectedAnnotations()
println("Annotation areas completed")

tissue = getAnnotationObjects()
tissue[0].setPathClass(getPathClass("Tumor"))

tumorAnnotation = getAnnotationObjects()

makeInverseAnnotation(tissue[0])

removeObjects(tumorAnnotation,true)

////////////////////////////////////////////SECOND STEP//////////////////////////////////////////////////////////////

//v1.2 WATCH FOR COMPLETION MESSAGE IN LOG, TAKES A LONG TIME IN LARGE IMAGES

//This version strips out the user interface and most options, and replaces them with variables at the beginning of the script.
//I recommend using the GUI version to figure out your settings, and this version to run as part of a workflow.
//Possibly replace whole image annotation with another type
sigma = 4
downsample = 1
lowerThreshold = 0.5

//calculate bit depth for initially suggested upper threshold, replace the value with the Math.pow line or maxPixel variable
//int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
upperThreshold = 2147483647

weights = [0,0,0,0,1,0]
//Remove smaller than
smallestAnnotations = 300
fillHolesSmallerThan = 1000
//For detection of small objects, not included in GUI version.
removeLargerThan = 99999999999999

//Place all of the final weights into an array that can be read into ImageJ
//Normalize weights so that sum =1
sum = weights.sum()
if (sum<=0){
    print "Please use positive weights"
    return;
}
for (i=0; i<weights.size(); i++){
    weights[i] = weights[i]/sum
}
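// Added note (not in the original script): a quick worked example of the normalization above —
// weights = [0,0,0,0,1,0] already sums to 1 and is unchanged, while weights = [1,1,2] would become [0.25, 0.25, 0.5]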

//[1,2,3,4] format can't be read into ImageJ arrays (or at least I didn't see an easy way), it needs to be converted to 1,2,3,4
weightList =weights.join(", ")
//Collect the existing annotations so they can be removed after the macro runs.  Not totally necessary, but useful when I am spamming various values.
annotations = getAnnotationObjects()

params = new ImageJMacroRunner(getQuPath()).getParameterList()

// Change the value of a parameter, using the JSON to identify the key
params.getParameters().get('downsampleFactor').setValue(downsample)
params.getParameters().get('sendROI').setValue(false)
params.getParameters().get('sendOverlay').setValue(false)
params.getParameters().get('getOverlay').setValue(false)
if (!getQuPath().getClass().getPackage()?.getImplementationVersion()){
    params.getParameters().get('getOverlayAs').setValue('Annotations')
}
params.getParameters().get('getROI').setValue(true)
params.getParameters().get('clearObjects').setValue(false)

// Get the macro text and other required variables
macro ='original = getImageID();run("Duplicate...", "title=X3t4Y6lEt duplicate");'+
        'weights=newArray('+weightList+');run("Stack to Images");name=getTitle();'+
        'baseName = substring(name, 0, lengthOf(name)-1);'+
        'for (i=0; i<'+weights.size()+';'+
        'i++){currentImage = baseName+(i+1);selectWindow(currentImage);'+
        'run("Multiply...", "value="+weights[i]);}'+
        'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
        'run("Z Project...", "projection=[Sum Slices]");'+
        'run("Gaussian Blur...", "sigma='+sigma+'");'+
        'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
        'run("Create Selection");run("Colors...", "foreground=white background=black selection=white");'+
        'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
        'selectImage(original);run("Restore Selection");'

macroRGB = 'weights=newArray('+weightList+');'+
        'original = getImageID();run("Duplicate...", " ");'+
        'run("Make Composite");run("Stack to Images");'+
        'selectWindow("Red");rename("Red X3t4Y6lEt");run("Multiply...", "value="+weights[0]);'+
        'selectWindow("Green");rename("Green X3t4Y6lEt");run("Multiply...", "value="+weights[1]);'+
        'selectWindow("Blue");rename("Blue X3t4Y6lEt");run("Multiply...", "value="+weights[2]);'+
        'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
        'run("Z Project...", "projection=[Sum Slices]");'+
        'run("Gaussian Blur...", "sigma='+sigma+'");'+
        'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
        'run("Create Selection");run("Colors...", "foreground=white background=black selection=cyan");'+
        'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
        'selectImage(original);run("Restore Selection");'


for (annotation in annotations) {
    //Check if we need to use the RGB version
    if (imageData.getServer().isRGB()) {
        ImageJMacroRunner.runMacro(params, imageData, null, annotation, macroRGB)
    } else{ ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)}
}

//remove whole image annotation and lock the new annotation
removeObjects(annotations,true)
//Option to remove small annotation areas; requires pixel size metadata


//In the GUI version, the Clip button goes with the Remove Small button to remove objects below the text box amount in um^2
areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}

for (section in areaAnnotations){

    def polygons = PathROIToolsAwt.splitAreaToPolygons(section.getROI())
    def newPolygons = polygons[1].collect {
        updated = it
        for (hole in polygons[0])
            updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
        return updated
    }
    // Remove original annotation, add new ones
    annotations = newPolygons.collect {new PathAnnotationObject(it)}

    removeObject(section, true)
    addObjects(annotations)


}




//PART2


pixelWidth = server.getPixelWidthMicrons()
pixelHeight = server.getPixelHeightMicrons()
smallAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelWidth, pixelHeight) < smallestAnnotations}
println("small "+smallAnnotations)
removeObjects(smallAnnotations, true)
fireHierarchyUpdate()

// Process all annotation objects
// (the original version of this snippet worked on selected objects only)
pathObjects = getAnnotationObjects()

// Create a list of objects to remove, add their replacements
toRemove = []
toAdd = []
for (pathObject in pathObjects) {
    def roi = pathObject.getROI()
    // AreaROIs are the only kind that might have holes
    if (roi instanceof AreaROI ) {
        // Extract exterior polygons
        def polygons = PathROIToolsAwt.splitAreaToPolygons(roi)[1] as List
        // If we have multiple polygons, merge them
        def roiNew = polygons.remove(0)
        def roiNegative = PathROIToolsAwt.splitAreaToPolygons(roi)[0] as List
        for (temp in polygons){
            roiNew = PathROIToolsAwt.combineROIs(temp, roiNew, PathROIToolsAwt.CombineOp.ADD)
        }
        for (temp in roiNegative){
            if (temp.getArea() > fillHolesSmallerThan/pixelSize/pixelSize){
                roiNew = PathROIToolsAwt.combineROIs(roiNew, temp, PathROIToolsAwt.CombineOp.SUBTRACT)
            }
        }
        // Create a new annotation
        toAdd << new PathAnnotationObject(roiNew, pathObject.getPathClass())
        toRemove << pathObject
    }
}

// Remove & add objects as required
hierarchy = getCurrentHierarchy()
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObjects(toRemove, true)
hierarchy.addPathObjects(toAdd, false)

largeAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelSize, pixelSize) > removeLargerThan}
removeObjects(largeAnnotations, true)

getAnnotationObjects().each{it.setLocked(true)}


//merge the final results into a single line in the annotations table (comment out the next two lines to skip this)
selectAnnotations()
mergeSelectedAnnotations()

println("Annotation areas completed")

// Second pass finished: class the inverse (non-tumor) annotation as Stroma
tissue = getAnnotationObjects()
tissue[0].setPathClass(getPathClass("Stroma"))

// Restore the Tumor annotation saved during the first pass
addObjects(tumorAnnotation)

Yep!

Nice work, and I’m glad that someone is using this!

In this particular case I wouldn’t have treated any of that as empty space since the autofluorescence channel (below) seems to indicate that it is all tissue and not non-tissue black space (which I would usually want to remove!). Decisions like that are important to discuss, though, and can have significant impacts on the results. In this case, I would expect the stromal tissue to have a lower density of cells since it, well, normally has a lower density of cells. I am not really sure what the “right” answer is, but I would expect it to differ from project to project!


Updated version of the GUI script that will work with 0.2.0m5 available here.

And for workflows in 0.2.0m5.

For some reason it doesn’t work quite right with the LuCa image: the Autofluorescence channel, when multiplied by 0, ends up with a value of 13 after the stack is rebuilt. Not sure what is going on there, but it has worked for other Vectra and IF images I have tested it on…

And the instructions for creating your own color maps/LUTs! I was wondering why I was having such a hard time tracking down where it had been posted.

Hello!
I am a pathologist in France and I am trying out QuPath. I am working with multiplex stainings.
I have a question about quantitative analysis. I would like to analyse cells with double or triple stainings. For example, I would like to count CD3+ FOXP3+ cells or CD3+ Ki67+ cells, and I would like to see how many CD3+ cells are also CD8+. I manage to get results for each single marker, but not to combine them.
How can I do that with QuPath?
Thanks for your help!

I would recommend looking at this thread…

Oh wait, that IS this thread, so the answer is already here. You would need to be more specific about what isn’t working if you are having trouble.
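To make that concrete (this is an added illustration, not part of the original scripts): once the steps in this guide have assigned each cell a composite class, you can count co-expressing cells by checking the class name in a short script. The marker names below are just examples and need to match whatever classes your own workflow produces.

// Hedged sketch: count cells whose composite class name contains both markers.
// Assumes the classification steps above have already run; marker names are examples only.
def cells = getDetectionObjects()
def cd3 = cells.findAll { (it.getPathClass()?.toString() ?: "").contains("CD3") }
def cd3Foxp3 = cells.findAll {
    def name = it.getPathClass()?.toString() ?: ""
    name.contains("CD3") && name.contains("FOXP3")
}
def cd3Cd8 = cells.findAll {
    def name = it.getPathClass()?.toString() ?: ""
    name.contains("CD3") && name.contains("CD8")
}
println("CD3+ FOXP3+ cells: " + cd3Foxp3.size())
println("CD3+ CD8+ cells: " + cd3Cd8.size() + " out of " + cd3.size() + " CD3+ cells")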

Hi, I have a multiplex acquisition composed of 40 tiff files, one per marker. The images are already aligned. Is it possible to open them in a project and view them as if they were the layers of a single image (ideally without converting my images to one ome.tiff file) ?

It seems easiest from the perspective of any quantification to form a single image, but if you want to load them all into a project and are running M9, you should be able to do it with the Analyze->Interactive image alignment command. Through the GUI, though, you would need to open 39 Interactive alignment windows and select each of the other channels. It would be much easier to make the single image if each file can be opened in FIJI.

I believe Pete may have written a script to do something similar automatically, somewhere, but I don’t know where.
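For what it’s worth, here is a very rough sketch of what a scripted merge might look like. It assumes you are on a version where TransformedServerBuilder and its concatChannels method are available (they are in 0.2.0; I have not checked m9), and the file paths are placeholders, so treat it as a starting point rather than a tested solution.

import qupath.lib.images.servers.ImageServerProvider
import qupath.lib.images.servers.TransformedServerBuilder
import java.awt.image.BufferedImage

// Placeholder paths, one single-channel tiff per marker (extend to all 40)
def paths = ['/data/marker01.tif', '/data/marker02.tif']
def servers = paths.collect { ImageServerProvider.buildServer(it, BufferedImage) }

// Start from the first image and append the channels of the others
// (assumes concatChannels modifies the builder in place, as in 0.2.0)
def builder = new TransformedServerBuilder(servers[0])
servers.tail().each { builder.concatChannels(it) }
def combined = builder.build()
println("Combined channels: " + combined.nChannels())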

Hello,

is there a way to load a trained classifier via scripting? (latest qupath version 0.2.0-m9)

Say I have trained a classifier; instead of opening each image in the project and applying the object classifier that is linked to the project, I would just want to put the command in a script and run it for the entire project.

Many thanks for the help

See Scripting .json classifiers in QuPath (0.2.0-m9)
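For anyone arriving here on 0.2.0 final rather than m9: applying an object classifier through the GUI logs a workflow command that can be dropped into a script and run across the project via Run > Run for project. A minimal sketch, assuming a classifier saved in the project under a placeholder name:

// Hedged sketch for 0.2.0 final (not m9); "My classifier" is a placeholder name
// that must match a classifier saved in the current project
runObjectClassifier("My classifier")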


Hello,

I tried to use your Tissue detection with GUI script, but I get the following error:

INFO: Starting script at Mon Apr 27 12:39:14 CEST 2020
ERROR: MissingMethodException at line 35: No signature of method: qupath.lib.images.servers.bioformats.BioFormatsImageServer.getBitsPerPixel() is applicable for argument types: () values: []

ERROR: Script error (MissingMethodException)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70)
    at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:130)
    at Script2.run(Script2.groovy:36)
    at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
    at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155)
    at qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:893)

What do I have to change to get it running?

Best regards,
Stefan


That script is no longer being updated, as both the Train pixel classifier and Create simple thresholder commands work with scripting or through the GUI in M10.

In M9, you could get approximately the same functionality out of the pixel classifier.
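If you do still want to run the old script on a recent milestone, the immediate problem is that getBitsPerPixel() is no longer available on the image server; I believe the bit depth is now reached through the pixel type, roughly as below. Treat the exact call as an assumption to check against your version.

// Hedged sketch of a possible replacement for the getBitsPerPixel() call (unverified; check your QuPath version)
def server = getCurrentServer()
int bitsPerPixel = server.getPixelType().getBitsPerPixel()   // assumption: ~m10+ API
long maxPixel = (long) Math.pow(2, bitsPerPixel) - 1
println("Suggested upper threshold: " + maxPixel)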