Quantifying H&E parameters



Do you have written instructions or a video tutorial on how to use QuPath to create a classifier for H&E that can detect tumor vs necrosis and then quantify the area of each? Any help would be appreciated.

Thank you.



If you are using 0.2.0m2, then it sounds like you want the pixel classifier video included in the new release description here. Unfortunately there is no high throughput version of that just yet, but hopefully soon!

Otherwise you could try classifying SLICs or using Positive Pixel detection (a couple of links in this one). I tend to prefer SLICs for that, but I am not really sure how consistent your images are. What necrosis looks like can also vary a bit depending on the sample (more or less hematoxylin spotting from cells still in the mix), making it somewhat challenging to classify at times.



Just a comment: I would rethink the idea behind such a question. H&E is not only non-stoichiometric, it is not even a stain specific for “tumour” or “necrosis” regions. Certainly that is not resolvable via pixel classification, as the features of viable and necrotic cells are not “pixel level”.



Hi @gabriel, if I understand this comment correctly it is about using the color of individual pixels alone to make the classification decision. Both the SLIC and ‘pixel classifier’ methods in QuPath can use local textures (either Haralick textures for SLIC, or filter responses for the pixel classifier) - so I think these could be suitable, no?

Both the suggested methods use trainable machine learning, although it’s hard to predict in advance whether the available features are likely to be sufficient for a good classification - especially without seeing example images. Would there be an alternative approach / method of staining that would be more appropriate / make the image analysis task easier?



Hi, maybe I am misunderstanding the question, but the OP’s request seems too broad and ambitious. The first problem is that there are various types of necrosis: liquefaction, coagulative, caseating, gangrenous and haemorrhagic, to name a few. They can all look different. It is difficult to imagine that generalising the problem is achievable using H&E alone in a way that would apply to an arbitrary sample (necrosis in liver vs muscle vs thyroid vs brain?).
The second issue is the tumour vs necrosis framing. Are those the only two possibilities in the sample? What about normal cells of various types, and processes of various types too: inflammation, degenerative changes, ageing, non-tumour necrosis vs tumour necrosis? That is not trivial at all.

To answer the question on alternative approaches: if the problem is about finding “necrosis in a tumour” (note that this is quite different from finding the two classes in a larger sample where there might be many other tissues), I would first try to find out whether there is a reliable marker for the tumour, and only then investigate whether that marker is preserved in the necrotic regions (some are, some are not; there are several papers on this for different tumour types) and train a classifier on that.
Whether pixel classifiers would suffice for this, I do not know. There has been a lot of deep learning work with some impressive results in the last few years, so that may be another avenue to explore. However, there have been claims that (at least for some problems) the classifiers are shallow and can be fooled easily (think of the various artefacts that we have to put up with in histological preparations). Also, some people do not seem particularly happy with the difficulty of interpreting what the classifiers are doing (the old black-box problem). That, however, is a different (more philosophical?) problem: do we really know what is going on in the head of an expert observer?
Sorry for the long post.



Thank you @petebankhead and @gabriel - I have tested out some of the images with the pixel classifier and it seems to work fairly well. Obviously, it’s a pretty supervised approach using my annotations as a guide to train the classifier. Any instructions on what the various parameters mean for the pixel classifier (Gaussian fit, pixel resolution) and how to best adjust them based on your particular project? Or is it mostly trial and error for the best fit?



I would recommend looking at the results using the Show button next to the Features list. It will generate an image for each combination of feature and channel in the current pixel classifier viewer. Some features are better for picking up edges, while greater amounts of smoothing help prevent tiny misclassified holes. On the other hand, more smoothing can prevent you from picking up small pockets of… something, if small pockets are what you are interested in.

I would pick one color channel, select a bunch of features, and take a look. Once you get an idea, it’s mostly trial and error after that (is a sigma of 2 or 4 better? Hmm), but it helps to know what sort of features you are looking for with each selection.
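To build intuition for what the smoothing scale does before you start trialling values, here is a generic SciPy illustration (a synthetic image, not QuPath’s exact Gaussian feature implementation): a small bright “pocket” survives light smoothing but is averaged away by heavy smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic image: a small bright pocket on a noisy background
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.2, (64, 64))
image[30:33, 30:33] += 2.0  # a 3x3 pocket of signal

# Small sigma preserves the pocket; large sigma spreads it out so far
# that it blends into the background
smooth_small = gaussian_filter(image, sigma=1)
smooth_large = gaussian_filter(image, sigma=4)

# The pocket's peak remains much higher after light smoothing
print(smooth_small[31, 31], smooth_large[31, 31])
```

This is the same trade-off described above: larger sigma suppresses speckle and tiny misclassified holes, but also erases genuinely small structures you might care about.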