Training QuPath to detect tumour vs stroma in whole-slide images


I am new to image analysis and I am trying to analyse whole-slide images of tumours in QuPath v0.2.3. I have already watched @petebankhead's tutorials on YouTube, but I am still struggling to build an algorithm and train QuPath.

So far I have selected an area and annotated two regions (tumour / stroma). Cell detection and positive DAB cell detection work fine within the annotated region, but they do not differentiate between tumour and stroma. Moreover, I cannot get QuPath to “learn” where the tumour and stroma areas are in the rest of the slide.

An additional issue might be that I am working on a regular MacBook Pro and each image is 1 GB on average…

I would really be grateful if someone could help me. Cheers,


Hi Rita,

I think you should first let QuPath classify tumor versus stroma.
You can do this with SLICs, using the steps on this page:

In short, this makes QuPath divide your slide into a grid of SLICs (superpixels).
Then you generate a classifier by first manually annotating some stroma and tumor areas.
Then you let QuPath classify each SLIC as tumor or stroma.
These SLICs can then be transformed into annotations.
Within each annotation, you can perform cell detection, analyses, etc.

Best, Justin


If you have annotated two example areas, you might be able to use the pixel classifier to generate annotation areas.

Pete discusses that in the link, in what I think is a similar context. Whether the pixel classifier or superpixels work better for you is project-dependent. The inputs, or the selection of features, can have a major impact if staining can be independent of tumor/stroma state.

For example, if some of your tumor can be highly stained for DAB while other tumor areas can be negative for DAB, you would need to make sure your classifier did not pay too much attention to DAB stain intensity.
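That caveat can be made concrete with a tiny toy example (all numbers invented): a DAB-negative tumor region looks like stroma if the classifier only sees DAB intensity, but is called correctly once a stain-independent feature (here, a hypothetical "nuclear density") is included:

```python
# Toy illustration of why a tumor/stroma classifier should not lean
# too heavily on DAB intensity when some tumor can be DAB-negative.

def nearest(v, refs):
    """Label of the reference vector closest to v (squared Euclidean)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(refs, key=lambda k: d2(v, refs[k]))

# Reference vectors: (mean DAB intensity, nuclear density) -- made up
refs = {
    "Tumor":  (0.7, 0.9),   # trained mostly on DAB-positive tumor
    "Stroma": (0.1, 0.2),
}

dab_negative_tumor = (0.1, 0.85)  # negative stain, tumor-like density

# DAB intensity alone: the region is mistaken for stroma
label_dab_only = nearest((dab_negative_tumor[0],),
                         {k: (v[0],) for k, v in refs.items()})

# With the stain-independent feature included, it is called correctly
label_both = nearest(dab_negative_tumor, refs)

print(label_dab_only, label_both)  # Stroma Tumor
```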

Also, I recommend creating a duplicate image or even duplicate project for training classifiers, so you don’t overwrite your training regions in the future.


Thank you so much @JSNL and @Research_Associate for the prompt and useful replies.

I have now managed to get QuPath to identify the different regions based on my annotations, but I cannot find a way to get the % of positive cells within these identified regions.
I have attached my training image: tumour cells are more or less concentrated in the stained areas. I would like QuPath to identify the tumour areas (based on the two annotations I have made) and then give me the % of positive cells (in this case, that would be 100%). Is there any way to do this from the pixel classification without having to select the tumour area manually?

(Attached screenshot: Screen Shot 2020-12-03 at 15.39.13)

Kind regards,


If you run positive cell detection within annotations generated by the pixel classifier, it should automatically generate measurements per annotation that indicate what percentage of detected cells are positive or negative. If that is not what you mean, could you elaborate on how you expect to determine which cells are positive?
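The per-annotation measurement described above amounts to grouping detections by their parent annotation and dividing positives by the total. A minimal sketch, with invented detection records (QuPath's positive cell detection produces these measurements for you automatically):

```python
# Compute % positive cells per parent annotation from detection records.
# The records below are invented for illustration.
from collections import defaultdict

detections = [
    {"parent": "Tumour", "positive": True},
    {"parent": "Tumour", "positive": True},
    {"parent": "Tumour", "positive": False},
    {"parent": "Stroma", "positive": False},
    {"parent": "Stroma", "positive": True},
]

counts = defaultdict(lambda: [0, 0])  # parent -> [n_positive, n_total]
for d in detections:
    counts[d["parent"]][0] += d["positive"]
    counts[d["parent"]][1] += 1

pct_positive = {parent: 100.0 * pos / total
                for parent, (pos, total) in counts.items()}
print(pct_positive)
```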


A general question about training classifiers, mainly about how the computation works: when you already have annotated superpixels, let's say 100, does it make a difference to the “weight” in the calculations whether they are contained in 1 annotation or in 100 different annotations? Thanks, Juan