Analyzing Masson Trichrome images

I have hundreds of MT-stained slides, where blue-colored material represents collagen. The more collagen fibers are deposited, the darker the blue color gets. I graded the degree of collagen deposition from 1+ to 3+ in the images. Is it possible to write a script that classifies pixels in an ROI into 0 (not blue at all), 1+, 2+ and 3+? I want to put all my images into a project, designate ROIs for each image, and run the script for the whole project. I’m familiar with writing scripts but not with the pixel classification functionality… so I don’t know where to start. Any hints or reference documents would be appreciated!

Others may have cleaner ideas, but my first thought is:

Based on how strong your general blue staining is outside of the collagen regions, and the edge artifacts in the staining on the right, I would recommend a four-step approach.

1. A pixel classifier to determine which areas you want to measure, and which are normal tissue or background/empty. The end result of this step is an “everything collagen I want to measure” annotation.
2. Three thresholders on the measurable area, starting with 1+, then 2+, then 3+. Each is run, in order, on the annotation created in the previous step.

For the final data, your 1+ area would be the 1+ area minus the 2+ area, your 2+ area would be the 2+ area minus the 3+ area, and 3+ would just be the 3+ area. The thresholders would all be run on the blue stain from the trichrome, so you will need to get your color vectors right. Usually that is done using single-color stains, and since you have hundreds of slides, it might be worth using single-color staining to get the right color vectors for your samples. Otherwise, best of luck with guesstimating.
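The subtraction logic above can be sketched as plain arithmetic (a hypothetical illustration — the variable names and values are made up; in practice the cumulative areas would come from the three thresholders):

```groovy
// Cumulative areas from the three thresholders (hypothetical values, in µm²):
// each lower-grade threshold also captures all of the darker staining within it.
def areaAtLeast1 = 500.0
def areaAtLeast2 = 300.0
def areaAtLeast3 = 120.0

// Exclusive per-grade areas are recovered by subtraction
def area1 = areaAtLeast1 - areaAtLeast2   // pure 1+ staining
def area2 = areaAtLeast2 - areaAtLeast3   // pure 2+ staining
def area3 = areaAtLeast3                  // 3+ is already exclusive

println "1+: ${area1}, 2+: ${area2}, 3+: ${area3}"
```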

Hi, after trial and error I decided to segment the area with superpixels, and construct a classifier that classifies each superpixel into four classes: 3+, 2+, 1+, or 0.

So to construct a classifier, I annotated several areas as 3+, 2+, 1+ or 0, segmented the annotations into superpixels, computed their smoothed features and intensity features, and ran the ‘Train object classifier’ command. But it keeps producing the error message “You need to annotate objects with at least two classifications to train a classifier!” I have no idea what I am missing here… after spending years with v0.1.2, using the latest build of QuPath is quite demanding for me. gosh.

If you are using the latest build, the two things I would check are that the superpixels are child objects of the classified (NOT NAMED) annotations, and that there are enough superpixels. I am fairly certain I saw someone run into that error during a help session when they did not have enough detection objects.

Other than that, it would help to see any of what you are describing. The inputs into object classifier, the hierarchy, any of it.
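As a quick sanity check, this sketch (using standard QuPath scripting calls inside the QuPath script editor; it will not run standalone) prints how many detections carry each classification — the trainer needs detections under annotations of at least two different classes:

```groovy
// Count detections per classification to see what the trainer will find
def counts = [:].withDefault { 0 }
getDetectionObjects().each { d ->
    counts[d.getPathClass()?.toString() ?: 'Unclassified'] += 1
}
counts.each { cls, n -> println "${cls}: ${n}" }
```

If everything prints as ‘Unclassified’, or only one class shows up, that would explain the error.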

I finally failed to tackle the error message, so I tried two pixel classifiers, as you initially suggested.
Because the pixel classifiers worked nicely, I annotated ROIs in the images of my project and ran a script that calls the two pixel classifiers and saves the areas of 0, 1+, 2+ and 3+. Below is the core part of the script.


```groovy
// First pixel classifier: find and annotate the non-empty (tissue) area
setColorDeconvolutionStains('{"Name" : "H-DAB default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049 ", "Stain 2" : "DAB", "Values 2" : "0.26917 0.56824 0.77759 ", "Background" : " 255 255 255 "}');
addPixelClassifierMeasurements("RemoveEmpty", "RemoveEmpty")
createAnnotationsFromPixelClassifier("RemoveEmpty", 0.0, 0.0, "INCLUDE_IGNORED", "SELECT_NEW")

// Loop over the annotations kept by the first classifier
anns = getAnnotationObjects()
for (an in anns) {
    type = an.getPathClass()
    if (type != null) {
        if (type.getName() == 'Region*') {

            // Second pixel classifier: grade the collagen staining
            selectObjects { p -> p.getPathClass() != null && p.isAnnotation() }
            addPixelClassifierMeasurements("MTQuant", "MTQuant")
            createDetectionsFromPixelClassifier("MTQuant", 0.0, 0.0, "SELECT_NEW")

            // record area: 0, 1+, 2+, 3+, total
        }
    }
}
```

I discovered that for some images, the result generated by the script differs quite a bit from what I get by manually loading the second pixel classifier (MTQuant). For others, the result is not that different, or at least tolerable. Is it because of a stain vector issue, or something else…? Because of the difference, I cannot run the script for the whole project…

@nowhere27 I edited your post to add ``` at the top and bottom of the script so that it is formatted better (you can also select text and press the </> button in the toolbar to do this).

In what way is it different? Can you show examples?
I don’t think the stain vectors should make a difference here (but it is quite a long time since I wrote / used that code, and it’s possible I’m mistaken).

Below are four consecutive cases in a project (order: the original image, the classified detection objects generated by the script, and those generated by manually loading the pixel classifier).

1st example:

2nd example:

In these two cases, it seems as follows:

  • class 0 (pink) is misclassified as class 1 (sky blue)
  • class 1 (sky blue) is misclassified as class 3 (dark blue)
  • class 2 (intermediate darkness between class 1 and 3) is misclassified as class 3 (dark blue)
  • class 3 (dark blue) is classified correctly as class 3

3rd example:

Here, every class is misclassified as class 2.

4th example:

Here, class 3 is misclassified as class 0, while the other classes were correctly classified.

Thanks, have you tried removing the classifyDetectionsByCentroid lines in the script?

I’m not certain these lines are needed, and they could be messing up the classifications (since the centroid can often fall outside an object – especially if the generated objects have not been split).
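As an aside, here is a self-contained sketch of why a centroid can miss its own object — using plain java.awt geometry rather than QuPath’s classes: for a concave, L-shaped region, the area centroid falls in the notch, outside the shape entirely.

```groovy
import java.awt.Polygon

// L-shaped (concave) polygon: two rectangular arms
def xs = [0, 30, 30, 10, 10, 0] as int[]
def ys = [0, 0, 10, 10, 30, 30] as int[]
def poly = new Polygon(xs, ys, 6)

// Area centroid, computed by decomposing into the two rectangles:
// arm A (30x10, centroid (15,5), area 300) + arm B (10x20, centroid (5,20), area 200)
def cx = (300 * 15 + 200 * 5) / 500.0   // = 11.0
def cy = (300 * 5 + 200 * 20) / 500.0   // = 11.0

println poly.contains(cx as double, cy as double)   // false: centroid lies outside the L
```

Any classify-by-centroid step applied to such an object would sample the wrong region.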


Thanks a lot!!! I just added that line because the command was there when I used ‘Create command history script’ to write my own script.
