Same pixel classifier in different slides of same tumor

Hello to all. I have started using QuPath v0.2.0-m8 and its pixel classifier. I would like to try using the same classifier on other slides of the same tumor. But I have a doubt: the intensity of the staining can vary slightly between two slides; is there any method to “normalize” it (in QuPath or other software)? Thanks.

Under the advanced options (or feature selection?) I think I included a highly experimental attempt at local normalisation prior to calculating features. You could try this. It can result in the background going a little crazy, but sometimes this can be overcome with more annotations.

There is currently no other stain normalisation in QuPath. It is designed to support this in the future, but doesn’t contain any normalisation methods yet.
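To illustrate what such normalisation does outside QuPath, here is a minimal sketch of Reinhard-style mean/variance matching between two slides. Note this is a simplification done per channel directly in RGB (Reinhard's original method works in LAB colour space), and the function name is hypothetical, not part of any particular software:

```python
import numpy as np

def match_stats(source, target):
    """Shift the source image's per-channel mean/std to match the target's.
    Simplified Reinhard-style normalisation, applied per RGB channel."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # Avoid division by zero on a flat (constant) channel
        scale = t_std / s_std if s_std > 0 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this transformation, each channel of the source slide has approximately the same mean and spread of intensities as the reference slide, which can reduce the stain variation a classifier sees between slides.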

You can also create a training image that allows you to train the pixel classifier using regions from multiple images, trying to create a classifier that handles variations a bit better. Let me know if you want further information about this (I don’t recall if I documented it online yet).
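The general idea behind such a training image can be illustrated outside QuPath: regions cropped from several slides are stitched into one composite, and the classifier is then trained on that composite so it sees the variation across slides. A minimal sketch of the stitching step (the function name is hypothetical; this is not QuPath's actual implementation):

```python
import numpy as np

def build_training_montage(tiles, cols=2):
    """Stitch equally-sized RGB tiles cropped from different slides into
    one composite image for training a classifier across slides."""
    h, w, c = tiles[0].shape
    rows = -(-len(tiles) // cols)  # ceiling division
    montage = np.zeros((rows * h, cols * w, c), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, col = divmod(i, cols)
        montage[r * h:(r + 1) * h, col * w:(col + 1) * w] = tile
    return montage
```

Training on a montage like this tends to produce a classifier that tolerates staining differences better than one trained on a single slide.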


Thank you, Pete. Tomorrow I’ll try it. (You were right, the pixel classifier is the future)


Hello Pete,
I am currently facing the exact same problem as Manuel. More information about training the classifier using multiple images would be very much appreciated. I have already tested the preprocessing normalization feature, but I think there is still room for improvement in my case.
Thank you very much.

There is a brief description here.


It’s perfect! 2 slides of the same tumor, with 2 different markers:
Slide 1: stroma 54.45%;
Slide 2: stroma 54.12%
Thank you!


Cheers, yes that’s the description I was vaguely thinking of - I forget exactly what I’ve written and where 🙂

Need to update the documentation soon to bring everything together…

Thank you very much, this helped me a lot!


Hi, I thought this would be a good place to post this question. I am using the pixel classifier, and it seems to be working very well at identifying my different tissue regions. However, I have 56 images and I would like to run it in batch for all the images in my project.
Any suggestion?

Have you tried this?

Yes, I did, but I ran into the same issue. It looks like Pete already fixed the bug (16 days ago) with the object creation from the simple thresholder, which seems to be the reason this script doesn’t work. So I will keep looking.