Subdividing an existing annotation into annotations of interest

Sample image

Background

Hi fellow QuPath users/developers. I am analyzing biopsies in a canine model of volumetric muscle loss. The biopsies are taken normal to the wound bed, and thus have pretty clear “damage”, “intermediate”, and “healthy” zones (labeled necrosis, at-risk, healthy in the sample image).

These zones are more apparent on the Trichrome stain than the H&E, so I have detected cells on H&E and used an affine transform to port them over to the Trichrome image (“H&E transform” annotation), and am happy with the results of this. In parallel, I have created a pixel classifier that distinguishes:

  • Muscle
  • Necrotic Muscle
  • Edema
  • Connective Tissue
  • Background/ignore

I am now working on a way to combine all this data in the best way possible…

Challenges

I am looking to group the measured data (cell detections + pixel classifier) by biopsy zone. The way I am picturing the results is something like this:

Healthy Zone

  • % Muscle Area
    • Number of Muscle-associated cells
  • % Edema Area
    • Number of Edema-associated cells
  • % Necrosis Area
    • Number of Necrosis-associated cells

Intermediate Zone

  • % Muscle Area
    • Number of Muscle-associated cells
  • % Edema Area
    • Number of Edema-associated cells
  • % Necrosis Area
    • Number of Necrosis-associated cells

Damaged Zone

  • % Muscle Area
    • Number of Muscle-associated cells
  • % Edema Area
    • Number of Edema-associated cells
  • % Necrosis Area
    • Number of Necrosis-associated cells

Is there a way to divide the output of the “thresholder” tool into discrete subsections? Any other ideas for rigorously defining the three salient regions of tissue in a way that QuPath can understand?

I appreciate all input and thoughts!!

If I understand correctly…

If you are manually annotating the three zones, the rest should be fairly easy - pixel classifiers can be applied to your annotations without creating objects. Use the Measure option to create the area measurements, and the Classify option to classify your cells (based on whether they fall within one of the three classification areas).
If you can create a pixel classifier for the three zones, the analysis would go like this:

  • Create a tissue outline.
  • Within the tissue outline, use the pixel classifier to Create Objects (annotations or detections) for the three zones - the classifier should cover only the three zones. Alternatively, do this step manually.
  • Select your annotations by class (whatever the three names are) and use the pixel classifier to create measurements.
  • Select your cells and use the pixel classifier to classify them.

The commands for the classify and measurement steps should show up in the Workflow, just make sure to select the correct objects beforehand. When you mass export the data, you may want to selectively delete the whole tissue annotation to make the exported data “cleaner.” But that is up to you.
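For reference, a minimal scripted sketch of those steps - the classifier name “ZoneClassifier” is a placeholder for whatever your saved zone classifier is called, and the area thresholds would need adjusting:

```groovy
// Sketch only - assumes a pixel classifier saved as "ZoneClassifier" exists in the project
selectAnnotations()   // select the tissue outline so the zone objects are created inside it
createAnnotationsFromPixelClassifier("ZoneClassifier", 10000, 1000)   // min area / min hole area

selectAnnotations()   // add classifier area measurements to the annotations
addPixelClassifierMeasurements("ZoneClassifier", "ZoneClassifier")

classifyDetectionsByCentroid("ZoneClassifier")   // classify cells by what lies under their centroid
fireHierarchyUpdate()
```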

If you manually create your zone annotations, you will probably want to use the CTRL+SHIFT drawing mode so that the annotations do not overlap - but that will likely require removing or inverting the tissue annotation. I would recommend inverting the tissue (Objects->Annotations->Make inverse), deleting the original tissue annotation, then drawing your zones within the hole left by the inverted annotation.


I actually have already made my pixel classifier! In fact, I followed the exact workflow you just outlined :smiley:

Sorry for not being clear about what my question is… You say “if you manually create zone annotations” which is actually what I’m trying to get at. I would love to find a way to define these annotations in an automated, unbiased way. My dream scenario would be a script that runs a basic thresholder that outlines the tissue into an annotation and then asks for some sort of user input to define the two boundaries between the three zones. But, I’m not sure if this is a realistic possibility. In the case that it isn’t, do you have any ideas for the best way to move forward? I would love to move away from the manual annotation if it is at all possible…


Here is a picture of my pixel-classified cells, to show you where I’ve gotten to

@Research_Associate Okay, so I get what you were saying about the inverse-annotation and ctrl-shift drawing mode.

This is what I made, and I think it looks great, so thanks a lot for the suggestion :smiley:

Still interested in hearing what you or anyone else thinks about pseudo-automating this process.


I mean, almost anything is possible with enough coding, but there are easier and harder ways to do most things as well. The quickest semi-automated way I can think of off the top of my head would be using the polygon tool to loosely define the “middle” region.
If the orientation is consistent you can use the Y coordinates of the other two regions to figure out which is which - otherwise you may have to classify them, which would work as well.
Essentially, one object would be the intersect of the user-drawn polygon with the tissue, and the others would be the subtraction of the polygon from the tissue. At that point you could either use orientation or a classifier to figure out which of the two ends is which. The classifier could be pixel-based, or, using Add Intensity Measurements, you might be able to classify them by the mean OD of some stain. Not sure.
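A rough Groovy sketch of that idea, using QuPath’s JTS geometry support - the class names (“Tissue”, “At risk”, “Healthy”, “Necrotic”), the fragment-size filter and the orientation assumption are all placeholders to adjust:

```groovy
import qupath.lib.objects.PathObjects
import qupath.lib.roi.GeometryTools
import qupath.lib.regions.ImagePlane

// Assumes two annotations exist: the tissue outline (classified "Tissue") and
// one unclassified, user-drawn polygon roughly covering the middle zone.
def tissue = getAnnotationObjects().find { it.getPathClass() == getPathClass("Tissue") }
def poly   = getAnnotationObjects().find { it.getPathClass() == null }

def plane      = ImagePlane.getDefaultPlane()
def tissueGeom = tissue.getROI().getGeometry()
def polyGeom   = poly.getROI().getGeometry()

// Middle zone = intersection of the drawn polygon with the tissue
def middle = tissueGeom.intersection(polyGeom)
def zones = [PathObjects.createAnnotationObject(
        GeometryTools.geometryToROI(middle, plane), getPathClass("At risk"))]

// Remaining tissue = tissue minus polygon; this should fall apart into two pieces
def rest = tissueGeom.difference(polyGeom)
def pieces = []
for (int i = 0; i < rest.getNumGeometries(); i++)
    pieces << rest.getGeometryN(i)

// Ignore tiny fragments, then sort by centroid Y (orientation assumption:
// the healthy end sits towards the top of the image, the necrotic end towards the bottom)
pieces = pieces.findAll { it.getArea() > 1000 }.sort { it.getCentroid().getY() }

zones << PathObjects.createAnnotationObject(
        GeometryTools.geometryToROI(pieces.first(), plane), getPathClass("Healthy"))
zones << PathObjects.createAnnotationObject(
        GeometryTools.geometryToROI(pieces.last(), plane), getPathClass("Necrotic"))

addObjects(zones)
removeObject(poly, true)   // discard the helper polygon if no longer needed
fireHierarchyUpdate()
```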

Whether that is all worth the time to figure out the code vs using CTRL+SHIFT depends on how many images you want to analyze.

Harder (from a coding perspective) would be drawing two lines - there might be ways of splitting an annotation by lines, but default subtraction does not work with them, so I am not sure exactly how that would function.
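One possible workaround, offered only as a sketch (it is not from the linked posts): a line has no area, but buffering its geometry into a very thin polygon gives something that can be subtracted from the tissue.

```groovy
// Sketch only - assumes a line annotation is currently selected and a "Tissue" annotation exists
def line   = getSelectedObject()
def tissue = getAnnotationObjects().find { it.getPathClass() == getPathClass("Tissue") }

// Buffer the line by ~1 pixel so it gains an area, then subtract it from the tissue
def split = tissue.getROI().getGeometry().difference(line.getROI().getGeometry().buffer(1.0))
print "Tissue split into " + split.getNumGeometries() + " piece(s)"
```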

Not sure how you could make this all unbiased though if a pixel classifier cannot define the regions.


This would definitely work. The necrotic region is always going to be most superficial and the healthy region will always be deepest. As long as they are imaged consistently, we could say the Y components of the centroids will always be healthy < at risk < necrotic…

This is a very large study, so I think coding is the way to go here… but I frankly have no idea how I would code this from the ground up. Do you have an idea of the language/commands needed to define regions based on the intersection and difference of annotations? Is there a list of annotation-based commands somewhere that I could piece together?

Thank you!!


I think the best bet would be to use JTS geometries - I need to run right now, but I can try to look up some examples later. I think there was a post on subtracting annotations as well, though that may not be entirely what you need.
Not sure if @petebankhead has some good examples of doing JTS manipulation lying around that he would want to recommend first.


Another example plus the link to JTS stuff on the official docs page: Counting Cytoplasm + Nuclei labelled in different channels in QuPath - #3 by Research_Associate

Way, way more involved, but examples of both intersect and subtraction


Well… I spent most of the morning trying to parse through all that code. Thanks for sharing - I can tell that if I’d paid attention in AP Computer Science in high school, those examples would’ve been exactly what I needed :stuck_out_tongue: However, they’re more than I can handle at my current skill level. I think the ctrl-shift annotation is going to be how I move forward with this…

I actually have a follow up question regarding a new method of analysis I want to try out in this project. I’ve noticed that several of the myocytes have centralized nuclei (picture below), which is indicative of muscle remodeling and a really useful parameter to include while analyzing these samples. Has anyone attempted an analysis of this, or something similar, in QuPath?

My idea right now is to use the positive cell detection and see if there is a measurement parameter that sets these cells apart… I was thinking perhaps the Eosin Sum value in either “cell” or “nucleus”. Unfortunately, I’m at a bit of a standstill with this project because there were some problems in the image acquisition (as you may have noticed from the background color) that really interfere with the deconvolution. As such, I can’t get the cell detection where it needs to be to even attempt this until I get better images. But, in the meantime, I’d love to hear if there are any ideas for identifying this phenotype!

I think the Eosin sum sounds like a good start. If you have trouble with it, maybe increase the cell expansion so that border cells will include more “other stuff.”

Alternatively, you could create a pixel classifier that detects the wavy pattern of the muscle cells - then use that pixel classifier to apply measurements to the cells. Those measurements might then work for your classifier.

A third alternative could be Haralick features using the Eosin channel - something might jump out, I am not sure.
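If the Eosin sum route looks promising, here is a very rough sketch of thresholding an existing cell measurement - the measurement name “Nucleus: Eosin OD sum”, the cutoff and the class name are placeholders to check against your own measurement table, and a trained object classifier may well work better than a single cutoff:

```groovy
// Sketch only - measurement name and cutoff are placeholders; check your results table
def cutoff = 0.5
getCellObjects().each { cell ->
    if (measurement(cell, "Nucleus: Eosin OD sum") > cutoff)
        cell.setPathClass(getPathClass("Centralized nucleus"))
}
fireHierarchyUpdate()
```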


Regarding the subtraction scripting, I didn’t actually check my list of scripts:
