Preserving rounds of cell detection?

Hello again!

After solving the detection problem from my previous post with the help of @Research_Associate, I have now been able to successfully quantify two different cell types as well as the intracellular particles I was looking for. However, I am finding it difficult to integrate all of the information. I have tried two different approaches by frankensteining scripts I found both in this forum and on the QuPath GitHub, and while either of them should theoretically work, I seem to be missing one or several crucial steps, because both approaches break down.

EDIT: I’m using QuPath 0.2.0-m9.

1) Large annotation as a base: two rounds of cell detection followed by one round of fast cell counts. This approach works very well in that QuPath easily handles the large numbers of cell/detection objects (the first round of cell detection typically yields 50,000 to 200,000 detections, and the FCC round can go up to 700,000). However, each new round of detection removes the cell/detection objects from the previous round, making it impossible to resolve the hierarchy at the end. I understand that this is an essential feature of QuPath when it comes to tissue analysis, but is there any way to avoid object removal between steps?

import org.locationtech.jts.geom.Geometry
import qupath.lib.common.GeneralTools
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.roi.GeometryTools
import qupath.lib.roi.ROIs
import static qupath.lib.gui.scripting.QPEx.*
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.PathDetectionObject
import java.awt.Rectangle
import java.awt.geom.Area

setImageType('BRIGHTFIELD_OTHER');

toRemove = getAnnotationObjects()
removeObjects(toRemove, true)

runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 254,  "requestedPixelSizeMicrons": 20.0,  "minAreaMicrons": 800000.0,  "maxHoleAreaMicrons": 1000000.0,  "darkBackground": false,  "smoothImage": true,  "medianCleanup": true,  "dilateBoundaries": true,  "smoothCoordinates": true,  "excludeOnBoundary": true,  "singleAnnotation": false}');

def lockAnnotations = true


//Detect Cell type 1
selectAnnotations()
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.438 0.632 0.639 ", "Stain 2" : "DAB", "Values 2" : "0.0897558 0.96410716 0.24988253 ", "Stain 3" : "Residual", "Values 3" : "0.7746323 0.56158215 0.29080984 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 0.4,  "backgroundRadiusMicrons": 10.0,  "medianRadiusMicrons": 2.0,  "sigmaMicrons": 2.0,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 150.0,  "threshold": 0.01,  "maxBackground": 0.0,  "watershedPostProcess": true,  "excludeDAB": false,  "cellExpansionMicrons": 1,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');
selectObjectsByMeasurement("Cell: Area > 100 OR Cell: Circularity < 0.7")
clearSelectedObjects()
def detections1 = getCellObjects()

//Detect cell type 2
selectAnnotations()
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.0897558 0.96410716 0.24988253 ", "Stain 2" : "DAB", "Values 2" : "0.1439689 0.56760013 0.81061894 ", "Stain 3" : "Residual", "Values 3" : "0.7746323 0.56158215 0.29080984 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 0.3,  "backgroundRadiusMicrons": 8.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.2,  "minAreaMicrons": 20.0,  "maxAreaMicrons": 400.0,  "threshold": 0.05,  "maxBackground": 1.0,  "watershedPostProcess": true,  "excludeDAB": false,  "cellExpansionMicrons": 1,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');
def detections2 = getCellObjects()

//FCC
selectAnnotations()
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 500.0,  "trimToROI": true,  "makeAnnotations": true,  "removeParentAnnotation": true}');
selectAnnotations()
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.7746323 0.56158215 0.29080984 ", "Stain 2" : "DAB", "Values 2" : "0.0897558 0.96410716 0.24988253 ", "Stain 3" : "Residual", "Values 3" : "0.1439689 0.56760013 0.81061894 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.opencv.CellCountsCV', '{"stainChannel": "Hematoxylin",  "gaussianSigmaMicrons": 0.12,  "backgroundRadiusMicrons": 0.9,  "doDoG": true,  "threshold": 0.08,  "thresholdDAB": 0.2,  "detectionDiameter": 7.0}');
selectDetections();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.01,  "region": "ROI",  "tileSizeMicrons": 25.0,  "colorOD": true,  "colorStain1": true,  "colorStain2": true,  "colorStain3": true,  "colorRed": false,  "colorGreen": false,  "colorBlue": false,  "colorHue": false,  "colorSaturation": false,  "colorBrightness": false,  "doMean": true,  "doStdDev": true,  "doMinMax": true,  "doMedian": true,  "doHaralick": false,  "haralickDistance": 1,  "haralickBins": 32}');
selectObjectsByMeasurement("ROI: 0.01 µm per pixel: OD Sum: Mean > 1")
clearSelectedObjects()
selectDetections();
runPlugin('qupath.lib.plugins.objects.ShapeFeaturesPlugin', '{"area": true,  "perimeter": true,  "circularity": false,  "useMicrons": true}');
def detections3 = getDetectionObjects()
//print "Detection 3 Done"
selectAnnotations()
mergeSelectedAnnotations()

2) Small annotation as a base: two separate rounds of cell detection with conversion to annotations, then FCC within the annotations generated from the cell objects. This approach works perfectly for small numbers of cells, but it quickly balloons in memory use and elapsed time when expanded to larger areas (which by themselves are still < 5% of my total area under study). If this is the only approach I can use, should I extensively tile the initial large annotation and then remove objects from memory as QuPath finishes each tile? Or is there a better approach?

import org.locationtech.jts.geom.Geometry
import qupath.lib.common.GeneralTools
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.roi.GeometryTools
import qupath.lib.roi.ROIs
import qupath.lib.roi.RoiTools
import static qupath.lib.gui.scripting.QPEx.*
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.PathDetectionObject
import java.awt.Rectangle
import java.awt.geom.Area

resolveHierarchy()

setImageType('BRIGHTFIELD_OTHER');

//The following annotates the region under study
def region = getPathClass('Region')
def topLevel = getCurrentHierarchy().getRootObject().getChildObjects()
getCurrentHierarchy().getSelectionModel().setSelectedObjects(topLevel, null)
selected = getSelectedObjects()
for (def annotation in selected) {
    annotation.setPathClass(region)
}
fireHierarchyUpdate()


//Detect cell type 1
selectObjects { it.getPathClass() == getPathClass("Region") }
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.438 0.632 0.639 ", "Stain 2" : "DAB", "Values 2" : "0.0897558 0.96410716 0.24988253 ", "Stain 3" : "Residual", "Values 3" : "0.7746323 0.56158215 0.29080984 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 0.4,  "backgroundRadiusMicrons": 10.0,  "medianRadiusMicrons": 2.0,  "sigmaMicrons": 2.0,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 150.0,  "threshold": 0.01,  "maxBackground": 0.0,  "watershedPostProcess": true,  "excludeDAB": false,  "cellExpansionMicrons": 1,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');
selectObjectsByMeasurement("Cell: Area > 100 OR Cell: Circularity < 0.7")
clearSelectedObjects()
def detections1 = getCellObjects()
Type1 = getPathClass('Type1')
getCellObjects().each {
    it.setPathClass(Type1)
}
// Create new annotations with the same ROIs and classifications as the detections
def detections = getDetectionObjects()
def newAnnotations = detections.collect {detection -> new PathAnnotationObject(detection.getROI(), detection.getPathClass())}
// Remove the detections, add the annotations
removeObjects(detections, false)
addObjects(newAnnotations)
fireHierarchyUpdate()

//Detect cell type 2
selectObjects { it.getPathClass() == getPathClass("Region") }
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.0897558 0.96410716 0.24988253 ", "Stain 2" : "DAB", "Values 2" : "0.1439689 0.56760013 0.81061894 ", "Stain 3" : "Residual", "Values 3" : "0.7746323 0.56158215 0.29080984 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 0.3,  "backgroundRadiusMicrons": 8.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.2,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 150.0,  "threshold": 0.05,  "maxBackground": 1.0,  "watershedPostProcess": true,  "excludeDAB": false,  "cellExpansionMicrons": 1.068249470402297,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');
def detections2 = getCellObjects()
Type2 = getPathClass('Type2')
getCellObjects().each {
    it.setPathClass(Type2)
}
// Create new annotations with the same ROIs and classifications as the detections
def detections4 = getDetectionObjects()
def newAnnotations4 = detections4.collect {detection -> new PathAnnotationObject(detection.getROI(), detection.getPathClass())}
// Remove the detections, add the annotations
removeObjects(detections4, false)
addObjects(newAnnotations4)

fireHierarchyUpdate()

//FCC within cell annotations
selectObjects { it.getPathClass() != getPathClass("Region") }
setColorDeconvolutionStains('{"Name" : "Cyto", "Stain 1" : "Hematoxylin", "Values 1" : "0.7746323 0.56158215 0.29080984 ", "Stain 2" : "DAB", "Values 2" : "0.0897558 0.96410716 0.24988253 ", "Stain 3" : "Residual", "Values 3" : "0.1439689 0.56760013 0.81061894 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.opencv.CellCountsCV', '{"stainChannel": "Hematoxylin",  "gaussianSigmaMicrons": 0.12,  "backgroundRadiusMicrons": 0.9,  "doDoG": true,  "threshold": 0.08,  "thresholdDAB": 0.2,  "detectionDiameter": 7.0}');
selectDetections();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 0.01,  "region": "ROI",  "tileSizeMicrons": 25.0,  "colorOD": true,  "colorStain1": true,  "colorStain2": true,  "colorStain3": true,  "colorRed": false,  "colorGreen": false,  "colorBlue": false,  "colorHue": false,  "colorSaturation": false,  "colorBrightness": false,  "doMean": true,  "doStdDev": true,  "doMinMax": true,  "doMedian": true,  "doHaralick": false,  "haralickDistance": 1,  "haralickBins": 32}');
selectObjectsByMeasurement("ROI: 0.01 µm per pixel: OD Sum: Mean > 1")
clearSelectedObjects()
selectDetections();
runPlugin('qupath.lib.plugins.objects.ShapeFeaturesPlugin', '{"area": true,  "perimeter": true,  "circularity": false,  "useMicrons": true}');
//def detections3 = getDetectionObjects()

fireHierarchyUpdate()
resolveHierarchy()

Thanks in advance!
-PM


For 1), you should be able to duplicate the annotation and apply cell detection to the duplicate without impacting the cells in the original. Potentially a bit harder to keep track of what is going on though…
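
A rough sketch of that duplication (assuming a single annotation is selected; the variable names are just illustrative):

import qupath.lib.objects.PathObjects
import static qupath.lib.gui.scripting.QPEx.*

// Copy the ROI and classification of the selected annotation into a new annotation object
def original = getSelectedObject()
def duplicate = PathObjects.createAnnotationObject(original.getROI(), original.getPathClass())
addObject(duplicate)

// Select only the duplicate before running the second round of cell detection
selectObjects { it == duplicate }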


Hi @petebankhead, thank you for the quick reply; it worked really well for getting all detection categories to show in the same image. However, the “resolve hierarchy” command does not appear to be attributing any detections to cells, despite many cells fully containing said detections (instead it separates everything and attributes it to the global annotation). Can detections be forced into being sub-objects of cells?

Alas, not via resolve hierarchy - it only assigns relationships between annotations, or between detections and annotations. Anything more complicated than that (i.e. assigning detections to be inside other detections) would need to be explicitly scripted.

The simple rules of the hierarchy are intended to work efficiently in the majority of cases, but they break down when pushed, e.g. Expected behavior for detections in the hierarchy
So if you want to do anything less conventional, it would all need to be scripted.

@petebankhead: Would that be based on modifying qupath.lib.objects.hierarchy.PathObjectHierarchy? Do you know if any similar approach has been implemented (or attempted) before?

And, more importantly: do you think implementing that would be so computationally expensive and time-consuming as to make approach 2 more valid?

You may want to give Subcellular detections with varying degrees of splitting a try (split by shape, split by intensity, etc.) if you want detections within other detections.

You can avoid object removal by saving the objects as well:


Here I have fast cell counts and regular cells in the same annotation by saving the original cell objects and then restoring them after the fast cell counts.
That does not make the fast cell counts child objects of the cell objects, however.

If you are less worried about the actual hierarchy (which I don’t know how to change) and would settle for a count, you could use something like this.

selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 8.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 0.1,  "maxBackground": 2.0,  "watershedPostProcess": true,  "excludeDAB": false,  "cellExpansionMicrons": 5.0,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');
firstRun = getCellObjects()
getCellObjects().each{
    it.getMeasurementList().putMeasurement("count",0)
}
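// Note: running fast cell counts below will remove these watershed cells from the
// hierarchy, which is why they are saved in firstRun above and added back afterwards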
runPlugin('qupath.opencv.CellCountsCV', '{"stainChannel": "Hematoxylin",  "gaussianSigmaMicrons": 1.5,  "backgroundRadiusMicrons": 15.0,  "doDoG": true,  "threshold": 0.1,  "thresholdDAB": 0.2,  "detectionDiameter": 25.0}');
secondRun = getDetectionObjects()

addObjects(firstRun)
resolveHierarchy()
firstRun.each{ cell->
    secondRun.each{ fast->
        if(cell.getROI().contains(fast.getROI().getCentroidX(),fast.getROI().getCentroidY())){

            next = ++cell.getMeasurementList().getMeasurementValue("count")
            cell.getMeasurementList().putMeasurement("count",next)
            print cell.getMeasurementList().getMeasurementValue("count")
        }
    }    
}

It creates a measurement called “count” within each cell (run first) and counts how many of the fast cell count detections (run second) fall inside it. One thing stumped me for about 10 minutes: remember that fast cell counts are NOT cells, so you have to use getDetectionObjects() to access that list.


Hi @Research_Associate, thank you for the detailed answer and script!

I’m not especially concerned about preserving the hierarchy as long as I can extract the information I need, so I think an approach like the one you’ve scripted would work very well, especially as I’m fairly certain that I can still extract the angle of the intracellular particles through this approach.

However, when attempting the three detection runs together, I still quickly run out of resources. I see two potential ways out of this: either removing all FCC detections whose centroids do not overlap with any cell objects (these stray detections occur experimentally due to cell lysis), or writing all relevant cell/subcellular detection measurements to a table and removing the cells/detections as the for loops proceed (or both at the same time).

I think the first approach remains possible without the hierarchical organization, as the cell objects can still be used to create an inverse annotation, through which I can then use your for loop to remove all detections whose centroids fall within that inverse annotation.

However, for either approach, given the large number of detection objects, creating a list of objects to remove only adds to the problem. Is it possible to “delete as you go”? If so, would it be something akin to the following if implemented in your script?

// Nested each/each/if as in your script, with the removals added
firstRun.each { cell ->
    secondRun.each { fast ->
        if (cell.getROI().contains(fast.getROI().getCentroidX(), fast.getROI().getCentroidY())) {
            // ...record the count/measurements here, as in your script...
            removeObject(fast, true)
        }
    }
    removeObject(cell, true)
}

You probably could delete as you go, but that wouldn’t change the maximum amount of resources used (after the third object detection), so I’m not sure it would have much impact. @petebankhead might have a better idea about methods to reduce overhead. My reaction is usually to use a bigger/better computer =/


I’ve tried using an image processing server (256 GB RAM and 64 CPU cores) with the second approach from my original post, and it broke down just as quickly as my normal computer (16 GB, 6 cores), so I assumed I had to attack it from a design point of view. I’ll try this approach and see what happens, though!

Ah, if you had a ton of small annotations, that was a significant problem in M9 and might be better in the new M10! It may not have been related to resources.

I suspect M10 should be better, but running scripts on an image that is open within a viewer can often be vastly more resource-intensive than running them on an image that is not open in a viewer (e.g. using Run for project).

The main cost is in terms of processing all the update events, which must be single-threaded in Java - and so improved hardware has only limited benefit.

The alternative is to delve more into QuPath’s internals, and avoid using the simple addObject / removeObject methods that are provided for convenience in scripts. If you add/remove objects directly as children of other objects, the update events aren’t triggered at all… which is definitely the preferred way to do things when a lot could be updated.

It doesn’t matter so much if the image is open in the viewer or not in this case. The caveat is that you must remember to call fireHierarchyUpdate() at the end to get everything aligned again (assuming nothing too outrageous has happened elsewhere in the script that QuPath fails to protect against, like objects being added to themselves and suchlike…).

And you might also need to call a few updates along the way, since update events are used for things like spatial caches within the hierarchy.
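
In outline, that direct approach looks something like this (a sketch only; parentAnnotation, newCells and oldCells are placeholder names for whatever your script already holds):

// Adding/removing children directly on the parent object does not fire an event per object
parentAnnotation.addPathObjects(newCells)
parentAnnotation.removePathObjects(oldCells)

// A single update at the end re-synchronizes the hierarchy (and its spatial cache)
fireHierarchyUpdate()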


I apologize for what is likely to be a very basic question:

If QuPath scripts can be run without the viewer, is it safe to assume that addObjects is unnecessary after detecting the objects and assigning them to a variable?

i.e., in this example from @Research_Associate, can the marked lines be removed and still output the exact same information?

secondRun = getDetectionObjects()

**addObjects(firstRun)**
**resolveHierarchy()**
firstRun.each{ cell->
    secondRun.each{ fast->
        if(cell.getROI().contains(fast.getROI().getCentroidX(),fast.getROI().getCentroidY())){

            next = ++cell.getMeasurementList().getMeasurementValue("count")
            cell.getMeasurementList().putMeasurement("count",next)
            print cell.getMeasurementList().getMeasurementValue("count")
        }
    }    
}

The reasoning being that if visualization is the crux of most of my resource problems, I can easily forgo it, as I’ve confirmed that I’m detecting what I am aiming to detect!

I suspect it would work, but the main reason to add things back in and save them is to preserve data integrity, in case you need to prove your results, generate sample images for a paper, or answer a reviewer question.
One important aspect of image analysis is verifying that your algorithm works across all images; a different batch, stain, or sample might not work as well, and your best way to determine that is to look at the resulting segmentation.

*I may be slightly more sensitive to this as I am currently watching: https://twitter.com/NEUBIAS_COST/status/1253107624830066688
Excellent examples of many things that can go wrong in image analysis :slight_smile:


I haven’t tried it but don’t see a problem with that.

Although in that case you may also be able to add the new/matched objects to the child list of the cell (after checking centroids) and fire an update at the end.

You can also absolutely comment out the print line; it might be spamming the log quite a bit, unnecessarily. That was mostly for me to track issues while writing the script in the first place.

@petebankhead, what command could I add to the script to do that? And would manually assigning the cell objects to the initial region annotation also be faster than resolveHierarchy()?

@pedrolmoura something like

cell.addPathObject(fast)
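
i.e. roughly the following, reusing the firstRun/secondRun lists from the earlier script (a sketch, not tested here):

// Attach each fast cell count detection to the cell that contains its centroid
firstRun.each { cell ->
    secondRun.each { fast ->
        if (cell.getROI().contains(fast.getROI().getCentroidX(), fast.getROI().getCentroidY())) {
            cell.addPathObject(fast)
        }
    }
}
// One hierarchy update once everything has been assigned
fireHierarchyUpdate()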

As a quick trick, you can find out what methods are available on anything with describe, e.g.

selected = getSelectedObject()
println describe(selected)

but in general complex scripting is much easier with IntelliJ. I haven’t updated the documentation for that yet, but this gives the main points of setting it up: https://github.com/qupath/qupath/wiki/Advanced-scripting-with-IntelliJ


@Research_Associate: The script works beautifully, and I’m now getting quantification measures for a full cytospin in ~40 min. I think I can make it slightly faster by assigning the objects into the hierarchy (and inverting the order of the for loops to cycle through the list of particles before the list of cells), but even if not, this is already great. Thank you both very much for your help!
