Applying pixel classification Thresholder on detected cells

Hi all,

I'm trying to quantify a chromogenic multiplex staining for CD4-FoxP3 (hematoxylin, purple membrane and DAB nucleus). I already did it with Fiji, but now I would like to use QuPath.
I have already set up the stain vectors, detected cells, and managed to detect the purple CD4+ cells. But I'm stuck with the (CD4+)/FoxP3+ cells. I tried the single measurement classifier and the train object classifier, and I tried playing with the deconvolution vectors, but I didn't manage to get a good result.
The only good result I get is when I use Pixel classification > Create thresholder and use the Classify option: my FoxP3+ cells are perfectly classified.
But unfortunately, the classifyDetectionsByCentroid function erases all previously made classifications. Is there a trick to avoid this (i.e. to run this function only on selected detections)?

Thanks a lot


If that works, it seems odd that standard single measurement classifiers do not work, since that is basically what they are. Hard to say definitively without more information though.
You could store the old classifications in a variable and restore them. If you have two classifiers that each apply their class correctly on their own, setting a variable to 0 or 1 during the first classifier run should allow you to generate single and double positives in a third step.
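As a rough sketch of that idea in a QuPath Groovy script (the class name "CD4" and measurement name "Was CD4" are hypothetical; use whatever your first classifier assigns):

```groovy
// Sketch: record the first classifier's result as a per-cell measurement
// before the second classification run overwrites the classes.
// "CD4" / "Was CD4" are example names, not from the original post.
getCellObjects().each { cell ->
    double wasCD4 = cell.getPathClass() == getPathClass("CD4") ? 1 : 0
    cell.getMeasurementList().putMeasurement("Was CD4", wasCD4)
}
fireHierarchyUpdate()
```

After the second classifier runs, the stored measurement can be combined with the new class to derive single and double positives.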

Hi @Research_Associate ,
I think it is because the cell/nucleus detection is not perfect, since the purple/DAB staining masks the hematoxylin signal, so measurements inside the nucleus fail.

I understand the general idea, but I don't see how, in practice, you can use this variable to fuse (or derive) the classes.


Logic loops.
Assuming cells are classified "purple" or null,
and the measurement "DAB" was set to 1 if the cell was DAB positive:

if (measurement(it, "DAB") == 1 && it.getPathClass() == getPathClass("purple")) {
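A complete version of that loop might look like the following sketch (the combined class name "purple: DAB" is just an example; pick whatever suits your project):

```groovy
// Sketch: combine a stored DAB measurement with the existing "purple" class
// to assign double positives. Class/measurement names are examples only.
getCellObjects().each {
    if (measurement(it, "DAB") == 1 && it.getPathClass() == getPathClass("purple")) {
        it.setPathClass(getPathClass("purple: DAB"))
    }
}
fireHierarchyUpdate()
```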


In general, mixing two dark stains, or any stain and DAB, is not a great idea (once something is dark enough, you lose information about the stains that made up those pixels) - but if you can make it work, more power to you.

Thanks, it worked perfectly! (There was just one missing ) between ("purple") and { .)

I have one more question about classifyDetectionsByCentroid function.
If I understand correctly, it creates objects (by gathering connected pixels that satisfy the classifier's properties) and, if the centroid of such an object is inside an existing cell, the function assigns the classifier's class to that cell.
If this is correct, I don't understand why the cell with a green point inside it, at the top-left of the image below, is not assigned the blue class.



Ah, thanks, edited for anyone else that comes across this post.

As far as the pixel classifier goes, it should not create any objects, but use the mask of the classifier - and in that case every detection is represented by its centroid, not by the presence or absence of the classifier output within it.
The cell circled in the upper left would not be classified using the pixel classifier, since its centroid would be to the left of the green area.
Note that centroids are shape-weighted (x, y) coordinates, not the center of the cell's bounding box.
That means if you added more blur, you might pick up that cell.
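If you want to check this for a particular cell, a quick QuPath script can compare the shape-weighted centroid with the bounding-box center (assumes you have a cell selected):

```groovy
// Sketch: compare a cell's shape-weighted centroid with its bounding-box
// center using QuPath's ROI API. Select a detection first.
def roi = getSelectedObject().getROI()
print "Centroid:   " + roi.getCentroidX() + ", " + roi.getCentroidY()
print "Box center: " + (roi.getBoundsX() + roi.getBoundsWidth()/2) + ", " + (roi.getBoundsY() + roi.getBoundsHeight()/2)
```

For irregular or crescent-shaped cells the two points can differ noticeably, which is why a cell can contain classified pixels yet have its centroid fall outside the classified region.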

Alternatively, you may want to actually create objects, and then use more complicated coding to check whether the shapes overlap by X percentage with the cell. Or some other sort of area based logic.
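One way to sketch that area-based logic, assuming you have created annotation objects from the classifier output (the "FoxP3" class name and 25% threshold are arbitrary examples, and this uses QuPath's RoiTools to intersect ROIs):

```groovy
// Sketch: assign a class to cells whose area overlaps classifier-derived
// annotations by at least some fraction. Names/threshold are examples only.
import qupath.lib.roi.RoiTools

double minFraction = 0.25
def classified = getAnnotationObjects().findAll { it.getPathClass() == getPathClass("FoxP3") }
getCellObjects().each { cell ->
    def cellROI = cell.getROI()
    double overlap = classified.sum { ann ->
        RoiTools.combineROIs(cellROI, ann.getROI(), RoiTools.CombineOp.INTERSECT).getArea()
    } ?: 0
    if (overlap / cellROI.getArea() >= minFraction)
        cell.setPathClass(getPathClass("FoxP3"))
}
fireHierarchyUpdate()
```

Note this can be slow on large images, since every cell is intersected with every classified annotation.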

No need for anything too complicated – use the Measure button with the pixel classifier instead to add overlap measurements to all your detections. Then use those measurements with a conventional object classifier.

If you have two or more non-ignored classes, the measurements will include the proportion of overlap for each. Otherwise it will just give the classified area.

Note that there can still be some small surprises around boundaries because the measurements are made at the resolution at which the image is classified - so ‘fractional pixels’ that overlap another class might be visible in the full image but not contribute to the measurement (this shouldn’t be an issue if you classify at the full resolution… although it’s usually preferable to use a lower resolution for performance).

This is the intended way to use the pixel classifier with more sophisticated object classification based on staining in only part of a cell compartment.


Ah, that should be much faster than the subcellular detections. I had not tried selecting detections and running the pixel classifier measurement on those yet. The last time I did something similar, I created detection objects within cells and checked the size of those objects! That was… not optimal in terms of speed.

This seems promising, but is this measurement scriptable? When I press the Measure button, nothing is recorded in the workflow.


You should find it scriptable and in the workflow if

  • you’re using the latest release
  • you’re using a project
  • you’ve saved your classifier

The code looks like this:

addPixelClassifierMeasurements("Classifier name", "Measurement name")

Side note, you will want to do something like selectDetections() first if you want the measurement to be on your cells, I think.
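Putting those pieces together, a workflow-style script might look like the following sketch (classifier name, measurement name and threshold are placeholders, and the exact measurement name QuPath generates depends on your classifier and class names):

```groovy
// Sketch: restrict the pixel-classifier measurements to cells, then
// classify on the resulting overlap measurement. Names are placeholders.
selectDetections()
addPixelClassifierMeasurements("FoxP3 thresholder", "FoxP3")
getCellObjects().each {
    double pct = measurement(it, "FoxP3: FoxP3 %")   // check your actual measurement name
    if (!Double.isNaN(pct) && pct > 25)
        it.setPathClass(getPathClass("FoxP3"))
}
fireHierarchyUpdate()
```

Checking one cell's measurement list in the GUI first is the easiest way to confirm the generated measurement name before scripting against it.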

Thanks @petebankhead and @Research_Associate .
As usual, you rock !
