Counting Cytoplasm + Nuclei labelled in different channels in QuPath

Hi All,

I have tissue labelled in one channel with a cytoplasmic marker (a retrograde tracer, CTb), and in another channel with a nuclear marker (Fos). The aim is to count each individually, but also to count the CTb-positive cells that are also Fos-positive. Using QuPath, we have upwards of 100 annotations in a project, so the idea is to run through each annotation and determine these values (total CTb, total Fos, total CTb+Fos).

I have used Cell detection in QuPath to identify the positive cells in the CTb channel.

I have tried to combine this with counting the labelled nuclei using threshold settings in ImageJ, but I find that Cell detection works much better than thresholding for identifying Fos nuclei. As such, I would like to run cell detection on each channel separately (using different detection parameters) and then determine the overlap.

The approach I have settled on at the moment:
(please let me know if you think this could be more easily done!)

The first step is to export the CTb channel, with the detected cells, to ImageJ.

Then send the detected cells to the ROI Manager, fill them, and make a binary mask of these cells.

This is then repeated for the Fos channel.

Then, using the Image Calculator, the two masks are multiplied to give the nuclei that are contained within the CTb cells.

These nuclei can then be easily counted in ImageJ using Analyze Particles.
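For reference, the multiply-and-count step could be driven from a Groovy script through the ImageJ API. This is only a sketch: `maskCtb` and `maskFos` are placeholder names for the two binary masks already open as ImagePlus objects, and the particle size range is an arbitrary example.

```groovy
import ij.IJ
import ij.plugin.ImageCalculator

// Multiply the two binary masks: only pixels positive in both channels survive
def ic = new ImageCalculator()
def overlap = ic.run("Multiply create", maskCtb, maskFos)

// Count the remaining (double-positive) nuclei
IJ.run(overlap, "Analyze Particles...", "size=10-Infinity summarize")
```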

I am struggling to write a script for this approach, which is why I am reaching out for advice here. I imagine the script looking like this:

In each annotation of a project

  • Use cell detection in the CTb channel (save annotation measurements)
  • Export to ImageJ and create the binary mask
  • Use cell detection in the Fos channel (save annotation measurements)
  • Export to ImageJ and create the binary mask
  • Use the Image Calculator to multiply the masks and count the remaining nuclei (Analyze Particles)

Does this approach seem ‘scriptable’? Is there a better way!?
Any advice you could offer is appreciated!
My main problem at the moment is unfamiliarity with scripting, so if you could point me in the right direction for info I would appreciate that.

If your current issue is only the double-positive areas, would it make more sense to start from the channel with less background, and then calculate the intensity of CTb within each Fos area to determine which of those are double positive? If, as you say, there are 2-4 double-positive cells in the FoV you first posted, that seems like it might work fairly cleanly, entirely within QuPath.

You mention thresholding methods not working very well, but not how or why they are failing. The first image provided looks like it is already generating quite a few false positive CTb areas due to the background, so maybe some background subtraction in FIJI prior to analysis would be helpful?
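For example, a rolling-ball background subtraction could be applied from a script before detection. A minimal sketch, assuming the image is already open in ImageJ as an ImagePlus named `imp` (the radius is an arbitrary example value to tune to your cell size):

```groovy
import ij.IJ

// Rolling-ball background subtraction (radius in pixels)
IJ.run(imp, "Subtract Background...", "rolling=50")
```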

Hard to say too much about what is possible without a working example image, but if you have gotten your process to work, you may want to create the ImageJ macro and then run it through QuPath using the Macro runner (https://qupath.readthedocs.io/en/latest/docs/advanced/imagej.html#running-macros), or the scripts linked by PM for the last question and the class information.
https://github.com/qupath/qupath/blob/a03756328188999c0b7f12c290cda0589c50bd4b/qupath-extension-processing/src/main/java/qupath/imagej/gui/ImageJMacroRunner.java#L89

Again, I haven’t really tested anything through ImageJ in 0.2.0, so I don’t know what you might run into. There are multiple scripts that have involved sending regions to ImageJ, and then returning ROIs to QuPath to be counted and exported.

You might also try adapting Sara's (@smcardle) script here to find the intersection of the two combined geometries of objects (rather than unioning them all together, intersect the two sets after combining each individual group of cells).
https://gist.github.com/Svidro/5829ba53f927e79bb6e370a6a6747cfd#file-merge-touching-detections-groovy
That would also avoid having to use ImageJ at all, using JTS instead.
https://qupath.readthedocs.io/en/latest/docs/scripting/overview.html?highlight=JTS#working-with-java-topology-suite
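To give a rough idea, the JTS route might look something like the following. This is a hedged sketch against QuPath 0.2's scripting API (`GeometryTools`, `PathObjects`), and it assumes the two detection runs have already been assigned the classes "CTb" and "Fos":

```groovy
import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.GeometryTools

// Union each set of cells into a single JTS geometry
def ctbGeom = GeometryTools.union(
    getDetectionObjects().findAll { it.getPathClass() == getPathClass("CTb") }
                         .collect { it.getROI().getGeometry() })
def fosGeom = GeometryTools.union(
    getDetectionObjects().findAll { it.getPathClass() == getPathClass("Fos") }
                         .collect { it.getROI().getGeometry() })

// Intersect the two sets; the result covers only double-positive regions
def overlap = ctbGeom.intersection(fosGeom)
def roi = GeometryTools.geometryToROI(overlap, ImagePlane.getDefaultPlane())
addObject(PathObjects.createAnnotationObject(roi))
```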

I do appreciate the comment, and agree that the FoV shared was not an ideal example. I was just trying to illustrate the general aim of my analysis question. I am confident that I can adjust the cell detection parameters to avoid these issues in the future. But the main question remains: whether the approach I have in mind is possible at all.

Below I have a script that is getting me in the direction I had hoped.
It runs the analysis on each annotation and outputs that analysis to ImageJ before moving to the next. (If you have any general advice on this I am happy to hear it; it's my first effort!)

import static qupath.lib.gui.scripting.QPEx.*
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.regions.*
import qupath.imagej.gui.IJExtension
IJExtension.getImageJInstance()
def path = buildFilePath(PROJECT_BASE_DIR, 'annotation results')
mkdirs(path)

selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImage": "FITC",  "requestedPixelSizeMicrons": 0.1,  "backgroundRadiusMicrons": 12.0,  "medianRadiusMicrons": 3.0,  "sigmaMicrons": 4.0,  "minAreaMicrons": 30.0,  "maxAreaMicrons": 500.0,  "threshold": 10.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 2.2163,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');

def name1 = getProjectEntry().getImageName() + '_CTb.txt' // measurements are saved once per image, so use the image name
path1 = buildFilePath(path, name1)
saveAnnotationMeasurements(path1)

def annotations = getAnnotationObjects()
for (annotation in annotations) {
    def selectedObject = annotation // export this annotation's region
    boolean setROI = false
    def roi = annotation.getROI()
    def server = getCurrentServer()
    double downsample = 1.0
    def request = RegionRequest.createInstance(server.getPath(), downsample, roi)
    def imp = IJExtension.extractROIWithOverlay(getCurrentServer(),selectedObject,getCurrentHierarchy(),request,setROI,getCurrentViewer().getOverlayOptions()).getImage()
    imp.show()
    imp.setTitle("CTb_" + annotation.getName())
}
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImage": "Texas Red",  "requestedPixelSizeMicrons": 0.1,  "backgroundRadiusMicrons": 5.0,  "medianRadiusMicrons": 1.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 20.0,  "maxAreaMicrons": 150.0,  "threshold": 15.0,  "watershedPostProcess": false,  "cellExpansionMicrons": 2.127659574468085,  "includeNuclei": false,  "smoothBoundaries": true,  "makeMeasurements": true}');

def name2 = getProjectEntry().getImageName() + '_Fos.txt' // measurements are saved once per image, so use the image name
path2 = buildFilePath(path, name2)
saveAnnotationMeasurements(path2)

for (annotation in annotations) {
    def selectedObject = annotation // export this annotation's region
    def roi = annotation.getROI()
    def server = getCurrentServer()
    double downsample = 1.0
    def request = RegionRequest.createInstance(server.getPath(), downsample, roi)
    boolean setROI = true
    def imp = IJExtension.extractROIWithOverlay(getCurrentServer(),selectedObject,getCurrentHierarchy(),request,setROI,getCurrentViewer().getOverlayOptions()).getImage()
    imp.show()
    imp.setTitle("Fos_" + annotation.getName())
}
print 'Results exported to ' + path

However, the next problem is that the ROI boundary is now added to the image, in the same manner as the cell detections. Thus when I run the commands 'Add to ROI Manager' and 'Fill', the whole ROI is filled. I assume this is an easy fix but I can't figure it out…

I haven’t played around with it much, but if you are running this on a particular image, one at a time, could you turn off the Annotation overlay before sending to ImageJ? It looks like from the script it is taking the current Viewer options.

Other than that, I would still keep everything in QuPath, but that is because I am more comfortable there than with ImageJ. No other particular ImageJ related ideas.

One thing that might be useful to consider, on the intersection (or even not) analysis method, is that you can keep all of the cells at once.

def CTb_cells = getDetectionObjects()
CTb_cells.each{ it.setPathClass(getPathClass("CTb")) }

prior to the second cell detection, then add
addObjects(CTb_cells)
after the second set of cells has been created.

At that point, you should have both sets of cells visible, inspect-able, and select-able. The CTb cells would also have a class, allowing you to distinguish them from the Fos cells.
In fact, you might be able to do something as simple as check whether each CTb cell contains a Fos cell centroid, though I’d need a minute to figure out the exact code.

Yep, that wasn’t too bad. Note that this takes any CTb class cells and reassigns them the class CTb_Fos. So your total number of CTb cells is the CTb class + CTb_Fos.

setImageType('FLUORESCENCE');
clearAllObjects()
createSelectAllObject(true);
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImage": "DAPI",  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 0.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 2.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 0.8037823650008401,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');
def CTb_cells = getDetectionObjects()
CTb_cells.each{it.setPathClass(getPathClass("CTb"))}

runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImage": "CD8 (Opal 540)",  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 0.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 3.0,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 3.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 0.8037823650008401,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');
def Fos_cells = getDetectionObjects()
Fos_cells.each{it.setPathClass(getPathClass("Fos"))}
addObjects(CTb_cells)

// Reclassify any CTb cell that contains a Fos cell centroid
CTb_cells.each{ c ->
    Fos_cells.each{
        if (c.getROI().contains(it.getROI().getCentroidX(), it.getROI().getCentroidY()))
            c.setPathClass(getPathClass("CTb_Fos"))
    }
}

Not too long and I think it does what you were looking for without dipping into ImageJ.
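Once the classes are assigned, the per-class totals could be pulled out with something like this (a small sketch; remember that total CTb is the CTb count plus the CTb_Fos count):

```groovy
def detections = getDetectionObjects()
def nCtbOnly = detections.count { it.getPathClass() == getPathClass("CTb") }
def nFos     = detections.count { it.getPathClass() == getPathClass("Fos") }
def nDouble  = detections.count { it.getPathClass() == getPathClass("CTb_Fos") }
println "CTb total: ${nCtbOnly + nDouble}, Fos total: ${nFos}, double-positive: ${nDouble}"
```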

You will, of course, need to replace the cell detections!