QuPath 0.2 thresholder problem with Zeiss Axio Scan.Z1 WSI

Hello everyone,

I am trying to analyse some WSI kidney biopsy images made with our Zeiss Axio Scan.Z1.
The goal is to detect and count the positive red-stained cells inside and outside the glomeruli.

The images are loaded fine by QuPath. I set the image type to H-DAB (the staining is actually AEC in red, but I think it should work?).

Now I am trying to create a thresholder to detect the tissue for further analysis, playing around with a lot of the thresholder settings.
The problem is: at the border of my images I always get some areas which I don't want to have.
I tried the pixel classifier as well, with more or less the same result.



Picture 3: bigger ones

What can I try to get rid of those unwanted regions?
Is this a bug, or just a settings problem?

Best regards,

In the first image, the regions that are wrongly detected have black pixels. The thresholder doesn’t know that’s because they are unscanned, and so they are ‘correctly’ identified as being dark (or, more exactly in this case, having high hematoxylin values after stain separation… stain separation isn’t meaningful if the pixel values are zero, and can give weird results like this).

I’m not entirely sure why not all the dark regions are detected in the second image; I’d probably need to explore the image to understand what is happening there. But as far as I recall, some Zeiss images read with Bio-Formats can contain slightly different black (unscanned) regions at different magnification levels, and so the resolution at which you apply the threshold could differ from the resolution at which the image is currently viewed (the thumbnail of the third image hints this could be the case, since it doesn’t correspond exactly to the image as it appears in the viewer).
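If you want to check this yourself, here is a rough Groovy sketch for QuPath 0.2's script editor that reads the same region at two pyramid resolutions and counts the pure-black pixels in each, so you can compare where the unscanned areas appear. The region size and downsample factors are arbitrary examples, not anything specific to your images:

```groovy
import qupath.lib.regions.RegionRequest

// Read the same top-left region at two different downsamples to compare
// where the black (unscanned) pixels appear at each pyramid level.
// The 2048x2048 region and the downsample factors are arbitrary examples.
def server = getCurrentServer()
def reqFull = RegionRequest.createInstance(server.getPath(), 1.0,  0, 0, 2048, 2048)
def reqLow  = RegionRequest.createInstance(server.getPath(), 32.0, 0, 0, 2048, 2048)
def imgFull = server.readBufferedImage(reqFull)
def imgLow  = server.readBufferedImage(reqLow)

// Count pure-black pixels (all RGB channels zero)
def countBlack = { img ->
    int n = 0
    for (int y = 0; y < img.height; y++)
        for (int x = 0; x < img.width; x++)
            if ((img.getRGB(x, y) & 0xFFFFFF) == 0) n++
    return n
}
println "Black pixels at full resolution: ${countBlack(imgFull)}"
println "Black pixels at 32x downsample:  ${countBlack(imgLow)}"
```

If the two counts disagree badly (after accounting for the downsample), that would support the idea that the black regions differ between pyramid levels.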

Either way, two options are:

  1. use the pixel classifier rather than the thresholder, training it so that the dark regions are not identified
  2. create a broad annotation around the tissue and only use the thresholder to create objects inside that – thereby prescreening to remove the black bits. Your broad annotation could be created quickly with the polygon tool, or even by running a thresholder once to detect everything that isn’t black.

If you do use the thresholder to detect the ‘non-black’ pixels to define your initial region of interest, you might also want to use Objects → Annotations… → Expand annotations with a negative expansion value to shrink the annotations a little bit so that they are safely away from the troublesome parts.
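As a rough script-form sketch of option 2 (the classifier names "notBlack" and "Tissue", the minimum-area values, and the −50 µm radius below are all placeholders you would replace with your own; the JSON parameters for the expansion step follow QuPath 0.2's DilateAnnotationPlugin as far as I recall, so check them against your own dialog):

```groovy
// 1. Run a coarse thresholder once to grab everything that isn't black,
//    selecting the resulting annotation ("notBlack" is a placeholder name)
createAnnotationsFromPixelClassifier("notBlack", 0.0, 0.0, "SELECT_NEW")

// 2. Shrink the selection away from the troublesome border regions
//    (the script equivalent of Objects → Annotations... → Expand annotations
//    with a negative value; -50 µm is an arbitrary example)
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin',
        '{"radiusMicrons": -50.0, "removeInterior": false, "constrainToParent": true}')

// 3. Apply the real tissue thresholder only inside the eroded annotation
selectAnnotations()
createAnnotationsFromPixelClassifier("Tissue", 5000.0, 0.0)
```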


Ok, thanks for the really quick reply.

I tried the pixel classifier. I only created two regions for two classes: Gewebe (tissue) and Other.

Looks good, but the black boxes are still there.

If I zoom in further, I can see that the edges of the black boxes are not black.

Maybe this is kind of my problem?
I tried different resolutions, but that does not change anything.

Best regards and thanks so far,

Hi, you’d need to give the pixel classifier examples within the black regions to train it not to detect them, and possibly around the boundaries as well.

(In general, best to give it very small annotations - even just individual points - focussing on areas that it gets wrong, since that tends to make it easier to train).

Ok, so far I don't get it working. I will make my annotations manually (around 200 images).

It would be nice to have a thresholder that keeps only values between a lower and an upper threshold!

Thanks so far,

Basically, you can either:

  • create an annotation around the region that you want to threshold, excluding the background/black squares (which means that the thresholding won’t happen where the black squares are, since the black squares won’t be inside your annotation). Then use the Create thresholder command and choose to process ‘any annotations’.
  • Use the pixel classifier (Train pixel classifier command), by giving it examples of background AND black regions, so the automatically-generated classifier will know what to classify these regions as (or ignore them).

I guess you could use the thresholder then reclassify via script according to some criteria. But I would try either option above first.
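A hedged sketch of that reclassify-by-script idea (the "Tissue" class, the "Ignore*" target class, and the area cutoff are made-up examples for illustration, not part of any built-in command):

```groovy
// Reclassify thresholded annotations that are implausibly large for tissue
// (e.g. the black unscanned squares) to a separate class to exclude them.
// The class names and the 1e7-pixel area cutoff are arbitrary placeholders.
def suspicious = getAnnotationObjects().findAll {
    it.getPathClass() == getPathClass("Tissue") &&
    it.getROI().getArea() > 1e7   // area in pixels
}
suspicious.each { it.setPathClass(getPathClass("Ignore*")) }
fireHierarchyUpdate()
```

Any size, shape, or intensity criterion could stand in for the area check; the point is just that annotations created by the thresholder can be reclassified afterwards by script.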

I tried many different settings, but nothing is really satisfactory.
Maybe there is somebody who can look at my slides?
I can upload some of them to Google Drive.

Best regards,


Yes, if you can provide an example of what you would like to achieve and some contextual information, that would help us give you a proper answer! (If there are no privacy issues involved.)

I would re-recommend one of Pete’s initial suggestions: creating two thresholders. The first one ignores anything too dark to be tissue, is run on the whole image, and creates a single annotation (no split).
You might need to run this at the highest resolution to prevent getting a line around the black areas. Definitely no smoothing.

Then selectAnnotations() if you did not include it as a setting in the first thresholder.
The second is the tissue thresholder that you have already created that works (probably with a minimum size threshold to remove background).

The script would end up looking something like

createAnnotationsFromPixelClassifier("notBlack", 0.0, 0.0, "SELECT_NEW")
background = getAnnotationObjects()
createAnnotationsFromPixelClassifier("Tissue", 5000.0, 0.0)

You would need to use your own createAnnotation lines of code/thresholders, of course!

I agree that would be helpful – although I’m not entirely sure how it should look in the user interface. Because thresholds link with classifications, it’s already quite complicated – I’ll give it some more thought though.

If the black regions didn’t occur at all, that would also fix the problem – since this stretches beyond QuPath, I’ve created an issue to discuss that:


Hello together,

so finally I managed to get my tissue detected.

I used this script:

createAnnotationsFromPixelClassifier("Bereich1", 0.0, 0.0)
createAnnotationsFromPixelClassifier("GewebeMod4G6B10", 100000.0, 100000.0, "DELETE_EXISTING", "SELECT_NEW")

My result:

Now I want to detect the red staining inside the green annotation (Gewebe).
If I run positive cell detection with 'process all annotations', I get all cells detected, but also inside the yellow (Other) annotation, and my purple and green annotations are gone.
Is this normal behaviour, and what can I do?

Best regards,

If you select a particular annotation and run cell detection, it will wipe out everything “inside” that annotation when it begins to create cells. If you want cells in all annotations, run the cell detection first, then place the other annotations. If you only want the cells within a certain annotation, generate that annotation first, then any other annotations/pixel classifiers.

If you are scripting, you could add all of the annotations (as you are doing now), then store certain classes of annotations, and re-add them later.

// Store the annotations that would otherwise be wiped out
// ("CHGA" is just the example class name used here)
def anno = getAnnotationObjects().findAll{it.getPathClass() == getPathClass("CHGA")}

// Select the parent annotation, then run cell detection, which deletes "anno"
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', /* ...your parameters... */)

// Add the stored annotations back into the hierarchy
addObjects(anno)

@petebankhead had a method to pop objects out of the hierarchy so that you could take some of your annotations and place them on the first “level,” but I do not remember exactly how to code that (it.setLevel(1) does not work).

Hello together,

back again to my first problem, with the black areas at different magnifications.
I can't get it working with my next project.
I tried many different resolutions and the thresholder vs. the pixel classifier, but I always get some black areas detected.

Maybe somebody can give me a hand again and look at my slides. It is mouse tissue, so no problem with privacy.
I have 3 of my slides on google drive:

Best regards,

I completely forgot about this until I found the file in my downloads folder, must have started the download and then headed off to work.

This won’t work for all images since it looks like there was tissue clipped off, but my steps were:

  1. Created an annotation to eliminate all of the black areas.
    0 Smoothing, full resolution.

  2. Expanded that annotation by negative X microns.

  3. Created a normal tissue thresholder using the eroded annotation as the selected annotation

The main problem is that the CZI format seems to be rather terrible about which pixels it sets to 0 at various pyramid levels. So you have to use the highest resolution and no smoothing when excluding all of the background crap. And even then you want to erode it a little bit.

Hopefully this helps someone if they are doing similar selections on CZI brightfield images.