QuPath: Pixel classification training help

Hey QuPath Community,

I am currently attempting to use the pixel classifier to differentiate various components of a histology slide image. It is an H&E-stained slide containing collagen, fibroblast cells, and vascular components.

Here is my issue: when training the classifier, it misclassifies the tissue depending on which class has been given more training annotations. These features are very similar but vary in shape. The fibroblast cells are within the veins, and I believe this is confusing the classifier, since it labels everything as either FB cells or the vascularity class.

Is there a way to better differentiate these features so that the classifier is more accurate on my tissue?
Below is the vascularity I wish to separate from the FB cells:


Here are the FB Cells:

P.S. I have attempted the same thing using Masson's trichrome-stained slides and had the same issue. I think it might not like the size variability of the vascularity.

Hi Avery,
This is a very hard problem! Those tissues look so similar that it's sometimes hard to differentiate them by eye. I've spent a long time looking at things like this, and I've come up with three approaches, which work occasionally (not always!):

  1. Use your existing pixel classifier to create detection objects. Then measure intensity, texture, and shape features on those objects, and train an object classifier to differentiate "circular areas with white gaps and elongated purple spots" from "large regions with elongated purple spots". (There's a script sketch for this below the list.)

  2. Use a new pixel classifier to create annotations for the vessels (white regions) themselves. Also create detection objects for a combined fibroblast/endothelial cell class. Then calculate the distance between the detection objects and the vessel class. You can set up a single measurement classifier to say that any detection less than 20 µm (or whatever looks right to you) from a vessel is the Endothelial class. Or, if that's not accurate enough, use the distance measurement as one more feature in the object classifier from option 1. (See the second sketch below.)

  3. Superpixels! I love them and I don't think they get enough attention. See the documentation and this informative post for more information. (The last sketch below shows how to generate them by script.)
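
Here's a rough Groovy sketch of option 1, assuming QuPath 0.3+. The classifier name "TissueTypes" and all the parameter values are placeholders for whatever you've saved and whatever suits your tissue:

```groovy
// Turn the pixel classifier output into detection objects
// (min object area and min hole area are in µm² – adjust to your tissue)
createDetectionsFromPixelClassifier("TissueTypes", 25.0, 5.0)

// Add per-object intensity and Haralick texture measurements
selectDetections()
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
    '{"pixelSizeMicrons": 2.0, "region": "ROI", "tileSizeMicrons": 25.0, ' +
    '"colorOD": true, "colorStain1": true, "colorStain2": true, ' +
    '"doMean": true, "doStdDev": true, "doHaralick": true, ' +
    '"haralickDistance": 1, "haralickBins": 32}')

// Shape features – circularity/solidity help separate roundish vessels
// from more elongated fibroblast-rich regions
addShapeMeasurements("AREA", "CIRCULARITY", "SOLIDITY", "MAX_DIAMETER", "MIN_DIAMETER")
```

You'd then train the object classifier on those measurements through the GUI (Classify > Object classification).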
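
For option 2, something like this should work, again with placeholder names ("Vessels" classifier; "Vessel", "Endothelial", and "Fibroblast" classes) and a 20 µm cutoff you'd tune by eye. Double-check the exact distance measurement name on one of your detections, since it can vary between QuPath versions:

```groovy
// Create vessel annotations from a saved pixel classifier
// (min area / min hole area in µm²)
createAnnotationsFromPixelClassifier("Vessels", 50.0, 10.0)

// Adds a distance-to-nearest-annotation measurement to every detection
detectionToAnnotationDistances(true)

// Anything within 20 µm of a vessel becomes Endothelial, the rest Fibroblast
def endothelial = getPathClass("Endothelial")
def fibroblast  = getPathClass("Fibroblast")
getDetectionObjects().each {
    double d = measurement(it, "Distance to annotation with Vessel µm")
    // NaN (no vessel measurement found) fails the comparison, so those stay Fibroblast
    it.setPathClass(d < 20 ? endothelial : fibroblast)
}
fireHierarchyUpdate()
```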
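
And for option 3, QuPath's built-in SLIC superpixel plugin can be scripted too; the parameter values here are just starting points to tune:

```groovy
// Generate SLIC superpixels inside the selected annotation(s)
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
    '{"sigmaMicrons": 5.0, "spacingMicrons": 50.0, "maxIterations": 10, ' +
    '"regularization": 0.25, "mergeSimilar": false}')
```

The resulting superpixel tiles can then be given the same intensity/shape features as in the first sketch and classified as objects.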

All of these rely on the object classifier instead of just the pixel classifier. The pixel classifier makes its decisions from local texture, and as the documentation says, that's often not enough.

If you were to zoom way in on an 8- or 16-pixel region around one of those cells, you likely wouldn't be able to tell which type of tissue you were looking at. You really need larger anatomical context to make that sort of decision, and that's where objects win.
