QuPath 0.2 thresholder problem with Zeiss Axio Scan.Z1 WSI

Hello everyone,

I am trying to analyse some WSI kidney biopsy images acquired with our Zeiss Axio Scan.Z1.
The goal is to detect and count the positively stained (red) cells inside and outside the glomeruli.

The images load fine in QuPath. I set the image type to H-DAB (the staining is actually AEC in red, but I think it should work?)

Now I am trying to create a thresholder to detect the tissue for further analysis, playing around with a lot of settings in the thresholder.
The problem is: at the border of my pictures I always get some areas which I don't want to have.
I tried the pixel classifier as well, with more or less the same result.

Picture 1

Picture 2

Picture 3 (bigger ones)

What can I try to get rid of those unwanted regions?
Is this a bug, or just a settings problem?

Best regards,
Stefan

In the first image, the regions that are wrongly detected have black pixels. The thresholder doesn’t know that’s because they are unscanned, and so they are ‘correctly’ identified as being dark (or, more exactly in this case, having high hematoxylin values after stain separation… stain separation isn’t meaningful if the pixel values are zero, and can give weird results like this).

I’m not entirely sure why not all the dark regions are detected on the second image; I’d probably need to explore the image to understand what is happening there. But as far as I recall, some Zeiss images read with Bio-Formats can contain slightly different black (unscanned) regions at different magnification levels… and so the resolution at which you apply the threshold could be different from the resolution at which the image is currently viewed (the thumbnail of the third image hints that this could be the case, since it doesn’t correspond exactly to the image as it appears in the viewer).
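To check whether the pyramid levels really differ, you can inspect the downsamples of the current image from a script. This is a minimal sketch using the QuPath 0.2 scripting API; the method names come from the `ImageServer` interface of that version and may differ in other releases:

```groovy
// Sketch (QuPath 0.2): list the resolution levels of the current image,
// to see which downsamples a classifier/thresholder could be applied at.
def server = getCurrentServer()
print 'Number of resolution levels: ' + server.nResolutions()
for (int i = 0; i < server.nResolutions(); i++)
    print 'Level ' + i + ' downsample: ' + server.getDownsampleForResolution(i)
```

If the unscanned black regions differ between levels, the level closest to your classifier's chosen resolution is the one that matters.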

Either way, two options are:

  1. use the pixel classifier rather than the thresholder, training it not to identify the dark regions
  2. create a broad annotation around the tissue and only use the thresholder to create objects inside that – thereby prescreening to remove the black bits. Your broad annotation could be created quickly with the polygon tool, or even by running a thresholder once to detect everything that isn’t black.

If you do use the thresholder to detect the ‘non-black’ pixels to define your initial region of interest, you might also want to use Objects → Annotations… → Expand annotations with a negative expansion value to shrink the annotations a little bit so that they are safely away from the troublesome parts.
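The negative-expansion step can also be scripted. A minimal sketch, assuming the plugin class and parameter names that QuPath 0.2 records in the Workflow tab when you run Expand annotations (the radius value here is just an example; check the exact line QuPath logs for your own run):

```groovy
// Sketch: shrink the selected annotation(s) by 50 µm so they sit safely
// inside the tissue, away from the black border artefacts.
// Plugin class/parameters as recorded by QuPath 0.2's Expand annotations command.
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin',
    '{"radiusMicrons": -50.0, "lineCap": "Round", "removeInterior": false, "constrainToParent": true}')
```

Running the command once through the menu and copying the generated line from the Workflow tab is the safest way to get the correct parameters for your version.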


OK, thanks for the really quick reply.

I tried the pixel classifier. I only created two regions for two classes: Gewebe (tissue) and Other.

It looks good, but the black boxes are still there.

If I zoom in further, I can see that the edges of the black boxes are not black.

Maybe this is part of my problem?
I tried different resolutions, which does not change anything.

Best regards and thanks so far,
Stefan

Hi, you’d need to give the pixel classifier examples within the black regions to train it not to detect them, and possibly around the boundaries as well.

(In general, best to give it very small annotations - even just individual points - focussing on areas that it gets wrong, since that tends to make it easier to train).

OK, so far I don't get it. I will make my annotations manually (around 200 pictures).

It would be nice to have a thresholder that accepts values between a lower and an upper threshold!

Thanks so far,
Stefan

Basically, you can either:

  • Create an annotation around the region that you want to threshold, excluding the background/black squares (which means that the thresholding won’t happen where the black squares are, since the black squares won’t be inside your annotation). Then use the Create thresholder command and choose to process ‘any annotations’.
  • Use the pixel classifier (Train pixel classifier command), by giving it examples of background AND black regions, so the automatically-generated classifier will know what to classify these regions as (or ignore them).

I guess you could use the thresholder then reclassify via script according to some criteria. But I would try either option above first.
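The ‘reclassify via script’ idea might look something like the sketch below. It assumes the annotations have an intensity measurement to test against (the measurement name here is hypothetical; you would need to add such a measurement first and use whatever name your objects actually contain):

```groovy
// Sketch: reclassify thresholded annotations that are actually black boxes.
// The measurement name is hypothetical - substitute one your objects really have.
getAnnotationObjects().each { ann ->
    def mean = ann.getMeasurementList().getMeasurementValue('Mean intensity')
    if (!Double.isNaN(mean) && mean < 10)   // near-black -> treat as unscanned
        ann.setPathClass(getPathClass('Ignore*'))
}
fireHierarchyUpdate()
```

Here `Ignore*` is one of QuPath's default classifications for regions to exclude from analysis; you could equally remove the objects instead of reclassifying them.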

I tried many different conditions, but nothing is really satisfactory.
Maybe there is somebody who can look at my slides?
I can upload some of them to Google.

Best regards,
Stefan


Yes, if you can provide an example of what you would like to achieve and some contextual information, that would help us give you a proper answer! (If there are no privacy issues involved.)

I would re-recommend one of Pete's initial recommendations: creating two thresholders. The first one can ignore anything too dark to be tissue, is run on the whole image, and creates a single annotation (no split).
You might need to run this at the highest resolution, to prevent getting a line around the black areas. Definitely no smoothing.

Then selectAnnotations() if you did not include it as a setting in the first thresholder.
The second is the tissue thresholder that you have already created that works (probably with a minimum size threshold to remove background).

The script would end up looking something like:

```groovy
// First thresholder: everything that isn't black, selected as the working region
createAnnotationsFromPixelClassifier("notBlack", 0.0, 0.0, "SELECT_NEW")
background = getAnnotationObjects()
//selectAnnotations();
// Second thresholder: the actual tissue, run inside the selected annotation
createAnnotationsFromPixelClassifier("Tissue", 5000.0, 0.0)
// Remove the outer "notBlack" annotation, keeping the tissue annotations
removeObjects(background, true)
```

You would need to use your own createAnnotation lines of code/thresholders, of course!

I agree that would be helpful – although I’m not entirely sure how it should look in the user interface. Because thresholds link with classifications, it’s already quite complicated – I’ll give it some more thought though.

If the black regions didn’t occur at all, that would also fix the problem – since this stretches beyond QuPath, I’ve created an issue to discuss that:
