Can QuPath be used to quantify non-cellular staining/features?

I have stained for collagen in my tissues with Sirius Red stain; I have attached one of the images below. QuPath seems to be adapted to quantifying histological images where cells are also stained.

Is it possible to quantify staining when cells are not stained, but tissue components such as collagen are?

[image: Sirius Red-stained tissue section]

If by quantification you mean area, that is fairly straightforward using either the pixel classifier or the simple thresholding tool in M10/M11, or the Positive Pixel detection in 0.1.2. I have never seen anyone try to differentiate between the two areas you have indicated, and from the quality of the image provided I can’t see differences.

It may be that a pixel classifier could pick them up if there are texture differences, but I don’t think a simple thresholder would suffice to separate the two types of tissue.

@smcardle and @Zbigniew_Mikulski have more experience with PSR and might have other suggestions.


Thanks a lot for your helpful response. I want to see the area of the image occupied by the fibrotic areas which I have enclosed in a selection with red borders. Fibrotic areas as well as connective tissues near airways have higher collagen content but the latter is normal. I am not able to set a colour threshold to separate the fibrotic and normal areas.

My objective is to define areas of fibrosis, establish a pattern, and apply it to all other images I have from the same staining experiment.

Yes, with that problem, you would need to try the pixel classifier, and make sure you try out many of the alternative “Features” possible, and run it at very high resolution. It will be… slow.

You might also try the Weka pixel classifier through Fiji, or ilastik.


Thanks a lot. So I can run the pixel classifier on the fibrotic areas and then apply the criteria to all my other slides?

That should currently be possible through scripting.


I’m going to jump in here and recommend superpixels. I love them for acellular regions like this. I find that they are better at “seeing” large scale features than the pixel classifier. @Research_Associate has a great description of them here.

The general workflow would be to:

  1. generate superpixels,

  2. calculate features on those regions, including basic intensity features, Haralick measurements, and anything else you think might be helpful,

  3. train an object classifier to identify fibrotic vs non-fibrotic regions,

  4. classify all the objects in all images in your project,

  5. (optional) use “tile classifications to annotations” to merge the detection objects into annotation objects (with a recent bug fix in m10),

  6. measure the area of the classified objects.


Thank you so much. In step 5 of the workflow you have kindly proposed, I need to classify all objects in all images in my project. Can this be done automatically, or do I have to do it manually?

This might help as well.

I’m not sure if this was the bug fix @smcardle was mentioning, but there may be some issues with the old detection classifier in M11.

After you have trained and saved a classifier, you can write a script to apply it. In m11, using the new object classifier, the command is simply:

runObjectClassifier("NameOfClassifier");

Once you have a script to do the whole workflow (generating superpixels through classification), you can use “Run for Project” to do all the images.
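As a rough sketch, such a whole-workflow script might look like the following. This assumes QuPath 0.2 milestone builds; the plugin class names and scripting commands (`runPlugin`, `selectAnnotations`, `runObjectClassifier`) exist there, but the JSON parameter values are illustrative and the classifier name is a placeholder, so adjust everything to your own images:

```groovy
// 1) Generate SLIC superpixels inside the selected annotation(s)
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
        '{"sigmaMicrons": 5.0, "spacingMicrons": 50.0, "maxIterations": 10, "regularization": 0.25, "mergeSimilar": false}')

// 2) Add intensity + Haralick texture features to the superpixel tiles
selectDetections()
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
        '{"pixelSizeMicrons": 2.0, "region": "ROI", "doMean": true, "doStdDev": true, "doHaralick": true, "haralickDistance": 1, "haralickBins": 32}')

// 3) Apply the previously trained and saved object classifier
//    ("NameOfClassifier" is a placeholder)
runObjectClassifier("NameOfClassifier")
```

With that saved as a project script, “Run for Project” should apply it to every image you select.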

Note: I highly recommend training your classifier on a combined training image (Classify > Training Images > Create Combined Training Image) for better accuracy across a project.


Thank you very much. But when I do Classify > Training Images > Create Combined Training Image, I get the error message: “No suitable annotations found in the current project”.

Create combined training image requires annotations of the class chosen to exist throughout the project. Have you created annotations with the same assigned class in several of your images?

One further thing to note when creating those… it will only collect saved annotations. Drawing them is not sufficient. I have occasionally omitted the last image in a set when using that feature, as I forget to save the current data before creating the combined training image.

Which steps do I need to follow to “train an object classifier to identify fibrotic vs non-fibrotic regions”?

This is how I tried to do it:

  • to set annotations on one loaded image:

    • polygon > annotations > right click > set properties > name = “fibrotic”

    • polygon > annotations > right click > set properties > name = “non-fibrotic”

    I defined on the same slide image 5 fibrotic and 5 non-fibrotic zones

  • To train an object classifier to identify fibrotic vs non-fibrotic regions

    • Classify > Object classification > Train object classifier > save & apply

    • Then I get the error: java.lang.NullPointerException


You may want to look into the classification steps mentioned in general to understand how QuPath works. You simply changed the name of the annotations, not their class. The name is only there for your benefit, the program doesn’t really make use of it.
https://qupath.readthedocs.io/en/latest/docs/tutorials/cell_classification.html#annotate-regions-containing-different-cell-types
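For example, the classification (as opposed to the name) can also be assigned by script. This minimal sketch assumes annotations were named “fibrotic”/“non-fibrotic” as you described, and uses hypothetical class names “Fibrotic”/“Non-fibrotic”; `getPathClass`, `setPathClass` and `fireHierarchyUpdate` are standard QuPath scripting calls:

```groovy
// Give each annotation a real classification based on the name it was given;
// the class names "Fibrotic" / "Non-fibrotic" are assumptions for illustration.
getAnnotationObjects().each { annotation ->
    if (annotation.getName() == "fibrotic")
        annotation.setPathClass(getPathClass("Fibrotic"))
    else if (annotation.getName() == "non-fibrotic")
        annotation.setPathClass(getPathClass("Non-fibrotic"))
}
fireHierarchyUpdate()
```

Normally, though, you would simply assign classes through the Annotations tab rather than by script.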


There are also step-by-step instructions for building an object classifier on YouTube.


Thank you so much for your kind help. I wish to ask for a little clarification. So if I understood you well, is this how the workflow from scratch will be?

  1. Create project

  2. Add images

  3. build an object classifier

  4. generate superpixels like this:
     Analyze > Region identification > Tiles & superpixels > SLIC superpixel segmentation

  5. calculate features on those regions, including basic intensity features and haralick measurements and anything else you think might be helpful,

  6. train an object classifier to identify fibrotic vs non-fibrotic regions,

  7. classify all the objects in all images in your project,

  8. (optional) use “tile classifications to annotations” to merge the detection objects into annotation objects (with a recent bug fix in m10),

  9. measure the area of the classified objects.

Thank you in advance

Step 3 isn’t correct, as you wouldn’t have any objects to build the classifier with at that point. It is also redundant with step 6.
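For the final measurement step, something like this sketch could sum the classified area by script. The “Fibrotic” class name is an assumption, and the pixel-calibration call shown is from QuPath 0.2, so it may differ in older milestone builds:

```groovy
// Sum the area of detections classified as "Fibrotic" (assumed class name),
// converting from pixels^2 to um^2 using the image's pixel size.
def pixelSize = getCurrentImageData().getServer().getPixelCalibration().getAveragedPixelSizeMicrons()
def fibroticArea = getDetectionObjects()
        .findAll { it.getPathClass() == getPathClass("Fibrotic") }
        .sum { it.getROI().getArea() * pixelSize * pixelSize } ?: 0
println "Fibrotic area: " + fibroticArea + " um^2"
```

If you merged the tiles into annotations first (step 8), the same areas also appear directly in the annotation measurements table.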