Liver Steatohepatitis H&E

Hello!

Novice QuPath user with limited experience in histological analysis programs, requesting some feedback on my analysis methodology.

I have done an H&E as well as an ORO staining on Apoe-/- murine liver sections, and while I could manage the ORO sections quite well, the H&E stain seems trickier.

For ORO, a “Simple Tissue Detection” followed by a pixel classifier and an export-annotations script did the trick, although scripting the pixel classifier would be lovely (future version?).

But for an H&E image I would like to quantify structures rather than color (in a sense), and since I am not a pathologist I am not qualified to train a classifier to the point of scoring steatosis, so I thought of a simpler solution.

I would like to classify white spaces smaller than a certain size. This is a very rudimentary method, but would suffice to at least quantify microsteatosis.

Any ideas on how to do this? I tried playing with the create objects function in the pixel classifier, but it quickly ate up all my RAM when applied to whole slide images. The dream would be to only classify “white spaces” smaller than a given size, so as to exclude vessels and bile ducts.

Also, I’m having trouble applying a pixel classifier to only the annotation: it spreads across the whole slide even when “annotation only” is specified, and at a lower pixel size this eats up quite a bit of processing power.

Any feedback is greatly appreciated.

Scripting the pixel classifier can be done now, thanks to a post from @dstevens here:


More on Simple Thresholding here:

Classifying white spaces within the simple tissue annotation is covered briefly here:

You can modify a script from the Remove Objects list of scripts to find anything above or below a certain size, and delete those objects.
https://gist.github.com/Svidro/33558ac3bd9f68a5ec2428f74550831f
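For example, here is a minimal sketch of that kind of size filter, assuming you have already created classified annotations from the pixel classifier; the “Whitespace” class name and the pixel-area cutoff are placeholders you would adjust to your own data:

// Minimal sketch: delete classified annotations above a size cutoff, so only
// small white spaces (candidate microsteatosis) remain.
// "Whitespace" and maxAreaPixels are placeholders - adjust to your classes and resolution.
double maxAreaPixels = 2000

def toRemove = getAnnotationObjects().findAll {
    it.getPathClass() != null &&
    it.getPathClass().getName() == "Whitespace" &&
    it.getROI().getArea() > maxAreaPixels
}
removeObjects(toRemove, true)
println "Removed " + toRemove.size() + " objects larger than " + maxAreaPixels + " px^2"

The same pattern works the other way around (keep only the large objects) by flipping the comparison.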

Take a look, and I’m sure you will have more questions.

For high-resolution whole slide images, it is entirely possible you need a better computer or more RAM. If you are using M6, all processes should be smoother than in earlier versions, but what you are asking the program to do is fairly intensive, especially with many small objects, which is pretty much the worst case.

As posted in one of the links above, when you run a pixel classifier, it runs on the selected annotation or on everything in the image. I think M6 now shows a warning when you have chosen the whole image, though it might only be an alert when you have an annotation selected that you did not intend.

2 Likes

Automated analysis of a NAFLD/NASH model with QuPath has been described in a poster at a conference last year: https://www.eurekalert.org/pub_releases/2018-04/eaft-aao041318.php . I tried to contact the authors to get more details on the method but unfortunately was not successful. I have seen this done on other platforms (e.g. Aiforia, https://www.aiforia.com/), where macro- and microvesicular steatosis, fibrosis, and biliary hyperplasia have been quantified successfully with classifiers.

When I tried doing this with QuPath, the pixel classifier did not yet exist or had just been released, so I could not accomplish much. I want to try again, and I’m optimistic that it should be possible with the recent improvements in the pixel classifier. I am a pathologist, and until now I have been manually scoring NAFLD models with published scoring methods. It works but clearly has limitations that we are looking to overcome with image analysis.

But based on your image, my first recommendation would be to improve the quality of your sections. This looks like a frozen section? Of course frozen is needed for the oil red O, but for the H&E I would recommend FFPE. We would not manually interpret frozen H&E slides for NASH, because the quality of frozen sections is never sufficient, and for image analysis it would be even more problematic. High-quality FFPE sections will make your life much easier at the image analysis step. I have attached an example of an FFPE H&E section from a NAFLD model, and you can see how the histopathologic features are much easier to identify by eye; they would also be for a classifier.

3 Likes

Based on the images, the quantification of the, ah, lipid circles would be significantly impacted by the freezing of the section. Heh. Very significantly. Just goes to show that everything starts with sample prep.

2 Likes

Wow, thank you both for such wonderful replies.

Great stuff on using the simple threshold tool and remove objects; I think that should do the trick. I will try my best to make it work given my limited technical skills: a daunting task, but one I believe I can overcome.

But I cannot get the pixel classifier script to work outside of creating annotations. How would one modify it to apply a pixel classifier and export the measurements of an already existing annotation?

1 Like

Thank you for the reply.

I saw that poster while looking for clues as to how one would go about using QuPath to quantify NASH, and though I would have loved to learn how they did it, I did not go as far as to contact the authors. An automated solution like the one described in your link would be very convenient for those of us who lack the expertise to score such images manually.

And I do agree that cryosections of the sort I have here are inadequate for identifying morphological structures, but I hoped that a rudimentary quantification of steatosis could still be achieved; unfortunately, we only have cryosections from this particular experiment.

We did, however, redo this experiment and luckily isolated separate liver lobes for both cryo and FFPE sections; hopefully it will pay off.

I’m not entirely sure what you mean, but if you want a percentage, you would probably create the parent annotation, classify it as Tissue, and then divide the classified area by the parent area in the exported data file. If you wanted to dig more into scripting, though, there is an entire section on making your own measurements here:
https://gist.github.com/Svidro/68dd668af64ad91b2f76022015dd8a45
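For example, something along these lines could compute and store the percentage by script (a minimal sketch; the “Tissue” and “Whitespace” class names are placeholders for whatever classes you actually use):

// Sketch: store the percentage of the Tissue annotation covered by Whitespace annotations.
// Assumes a parent annotation classified "Tissue" and classifier-generated
// annotations classified "Whitespace" already exist (placeholder names).
def tissue = getAnnotationObjects().find { it.getPathClass() == getPathClass("Tissue") }
double tissueArea = tissue.getROI().getArea()

double whitespaceArea = getAnnotationObjects()
    .findAll { it.getPathClass() == getPathClass("Whitespace") }
    .sum { it.getROI().getArea() } ?: 0

// Add the percentage as a measurement on the Tissue annotation so it shows up in exports
tissue.getMeasurementList().putMeasurement("Whitespace %", 100 * whitespaceArea / tissueArea)
fireHierarchyUpdate()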

2 Likes

I’m sorry for not expressing myself clearly enough. The issue I have is that while the script you linked to indeed uses the pixel classifier, it applies the classifier and creates objects with the same command:

//Convert pixel classifier to annotations
//Not sure if the smallest annotations/holes is in pixels or microns....
PixelClassifierTools.createAnnotationsFromPixelClassifier(imageData, classifier, annotations, 500, 500, false, true)

I would like to use the pixel classifier simply to measure what percentage of a pre-defined annotation is classified as “x”, and subsequently export this along with the other data in the annotation.

I know how to script the tissue detection and measurement export, but I haven’t managed to find out how to incorporate the pixel classifier to allow for batch processing.

1 Like

Oh, I’m afraid I don’t know where or if those values are stored. If you could find them, I suppose you might be able to create a script where the minimum size was much more than any possible annotation to preclude the creation of objects.

The only way to batch it for now, that I know of, is to use that script to generate the objects and then perform the division to get the percentage. At least, that is how I’ve done it on several projects so far.

@petebankhead might know of another way, if there is one.

@pallarnte can you already get the measurements you want by running the pixel classifier interactively? It should measure areas within any annotations - although the measurements will only be available once all the tiles corresponding to the annotation have been classified.

These measurements might not persist after the classifier is closed. Making the pixel classifier properly scriptable is on my to-do list; I’m afraid I just have no time at all to work on it at the moment :(

Hi, if I understand correctly you have a predefined annotation and you would simply like measurements for what percentage of that annotation is covered by each class given by a pixel classifier. I think something like this may work for that purpose; however, right now it only works for one annotation in the image!

import qupath.lib.gui.ml.PixelClassifierTools

//Need a pre-existing annotation to begin with
def annotations = getAnnotationObjects()

//Load the pixel classifier from the current project
def project = getProject()
def classifier = project.getPixelClassifiers().get('Your-Pixel-Classifier-Name')

//Define image data
def imageData = getCurrentImageData()

//Convert pixel classifier output to annotations
//Smallest annotations/holes are given in pixels
PixelClassifierTools.createAnnotationsFromPixelClassifier(imageData, classifier, annotations, 500, 500, false, true)

//Remove our starting annotation so only the classifier output remains
removeObjects(annotations, true)

//Record the class name and area of each annotation the classifier created
def newAnnotations = getAnnotationObjects()
def annNames = []
def annSizes = []
for (newAnn in newAnnotations) {
    annNames << newAnn.getPathClass().getName()
    annSizes << newAnn.getROI().getArea()
}

//Delete the classifier output and restore the original annotation
removeObjects(newAnnotations, true)
addObjects(annotations)

//Add a percentage measurement for each class to the original annotation
def totalSize = annotations[0].getROI().getArea()
for (int i = 0; i < annNames.size(); i++) {
    annotations[0].getMeasurementList().putMeasurement(annNames[i] + " %:", 100 * annSizes[i] / totalSize)
}

This script should run the pixel classifier within your predefined annotation, measure the area of each annotation created, then delete those annotations, add the old one back, and add measurements indicating the percentages.

2 Likes

Nice! And yeah, if there were multiple annotations you would probably want to not delete them, but instead cycle through their child objects to apply the measurement to each parent. I would probably use createDetectionsFromPixelClassifier() in that case, to make it easier to distinguish and remove the child objects.
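For anyone trying that route, here is a rough, untested sketch. I’m assuming createDetectionsFromPixelClassifier() takes the same arguments as the annotation version above, that the created detections become children of each parent annotation, and that the classifier name is a placeholder - check the exact signature in your milestone before relying on it:

import qupath.lib.gui.ml.PixelClassifierTools

// Rough sketch of the multi-annotation variant (assumptions noted above).
def imageData = getCurrentImageData()
def classifier = getProject().getPixelClassifiers().get('Your-Pixel-Classifier-Name')
def parents = getAnnotationObjects()

PixelClassifierTools.createDetectionsFromPixelClassifier(imageData, classifier, parents, 500, 500, false, true)

// Cycle through the child detections of each parent and write a percentage per class
for (parent in parents) {
    double parentArea = parent.getROI().getArea()
    def children = parent.getChildObjects().findAll { it.isDetection() }
    def areasByClass = [:].withDefault { 0d }
    for (child in children) {
        def name = child.getPathClass() == null ? "Unclassified" : child.getPathClass().getName()
        areasByClass[name] += child.getROI().getArea()
    }
    areasByClass.each { name, area ->
        parent.getMeasurementList().putMeasurement(name + " %", 100 * area / parentArea)
    }
    // Optionally remove the child detections once the measurements are stored
    removeObjects(children, true)
}
fireHierarchyUpdate()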

Another way I just thought of was running the simple tissue detection twice: once with holes (max fill area set very low), once without holes. If you store the annotation after the first run, you can subtract the areas to get the whitespace. The percentage then follows the way you already calculated it.
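Something like this could do the subtraction afterwards (a minimal sketch, assuming both detections have already been run and the two resulting annotations were named “Tissue with holes” and “Tissue solid” - the names are just placeholders):

// Sketch: whitespace percentage from two tissue detection runs.
// Assumes both annotations already exist and have been given the placeholder names below.
def withHoles = getAnnotationObjects().find { it.getName() == "Tissue with holes" }
def solid     = getAnnotationObjects().find { it.getName() == "Tissue solid" }

double holesArea = solid.getROI().getArea() - withHoles.getROI().getArea()
double whitespacePct = 100 * holesArea / solid.getROI().getArea()

solid.getMeasurementList().putMeasurement("Whitespace %", whitespacePct)
println "Whitespace: " + whitespacePct + " %"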