Training QuPath to detect tumour vs stroma in whole slide images


I am new to image analysis and I am trying to analyse whole slides from tumours in QuPath v0.2.3. I have already watched @petebankhead's tutorials on YouTube, but I am still struggling to build an algorithm and train QuPath.

What I have done is select an area and annotate two regions (tumour / stroma). Cell detection and positive DAB cell detection work fine in the annotated region; however, they do not differentiate between tumour and stroma. Moreover, I cannot get QuPath to “learn” where the tumour and stroma areas are in the rest of the slide.

An additional issue might be the fact that I am working on a regular MacBook Pro and each image is about 1 GB on average…

I would really be grateful if someone could help me. Cheers,


Hi Rita,

I think you should first let QuPath annotate tumor versus stroma.
You can do this with SLICs (superpixels), following the steps on this page:

In short:

  1. QuPath divides your slide into a grid of SLICs (superpixels).
  2. You generate a classifier by first manually annotating some stroma and tumor areas.
  3. QuPath then classifies each SLIC as tumor or stroma.
  4. The classified SLICs can be converted into annotations.
  5. Within each annotation, you can perform cell detection, analyses, etc.
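If you want to automate step 1, the SLIC command can also be run from a script. A rough sketch: the plugin class name is what QuPath records in its Workflow tab, but the parameter values below are only hypothetical starting points, so copy the exact JSON from your own Workflow tab after running the command once in the GUI.

```groovy
// Create SLIC superpixels inside the currently selected annotation(s).
// Parameter values are hypothetical starting points – paste the exact
// JSON from your own Workflow tab instead.
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
    '{"sigmaMicrons": 5.0, "spacingMicrons": 50.0, ' +
    '"maxIterations": 10, "regularization": 0.25, "mergeSimilar": false}')
```

After training a detection classifier on these SLICs, the classified tiles can be merged into annotations from the GUI as described above.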

Best, Justin


If you have annotated two example areas, you might be able to use the pixel classifier to generate annotation areas.

Pete discusses that in the link, in what I think is a similar context. Whether that or superpixels works better for your project is project-dependent. The inputs or selection of features can have a major impact if staining can be independent of tumor/stroma state.

For example, if some of your tumor can be highly stained for DAB while other tumor areas can be negative for DAB, you would need to make sure your classifier did not pay too much attention to DAB stain intensity.

Also, I recommend creating a duplicate image or even duplicate project for training classifiers, so you don’t overwrite your training regions in the future.


Thank you so much @JSNL and @Research_Associate for the prompt and useful replies.

Although I have now managed to get QuPath to identify the different regions based on my annotations, I cannot find a way to get the % of positive cells within these identified regions.
I have attached my training image: tumour cells are more or less concentrated in the stained areas. I would like QuPath to identify the tumour areas (based on the two annotations I have made) and then give me the % of positive cells (in this case, that would be 100%). Is there any way to do this from the pixel classification without having to manually select the tumour area?

[Attached screenshot: Screen Shot 2020-12-03 at 15.39.13]

Kind regards,


If you run positive cell detection within annotations generated by the pixel classifier, it should automatically generate measurements per annotation that indicate what percentage of detected cells are positive or negative. If that is not what you mean, could you elaborate on how you expect to determine which cells are positive?
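If it helps, those per-annotation summaries can also be read out with a short script. This is a sketch assuming Positive cell detection has already been run, so each annotation carries the "Positive %" summary measurement that QuPath adds:

```groovy
// Print the positive-cell percentage for each annotation.
// "Positive %" is the summary measurement added by Positive cell detection;
// the value will be NaN for annotations where detection was not run.
getAnnotationObjects().each { ann ->
    double pct = ann.getMeasurementList().getMeasurementValue('Positive %')
    println "${ann.getPathClass()}: ${pct}% positive"
}
```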


A general question related to training classifiers, mainly about how the computation works: when you already have annotated superpixels, let's say 100, does it make a difference to the “weight” in the calculations whether they are contained in 1 annotation or in 100 different annotations? Thanks, Juan

Hi all, I really need some help with segmentation in QuPath. Maybe I am doing something wrong, but here is what is happening:
I have whole slide images, but for now I am working in random areas of each slide (my computer cannot handle more). After training the pixel classifier to detect tumour vs stroma, my sequence is:

  1. 10 random annotations (squares);
  2. merge the annotations;
  3. load the pixel classifier and generate annotations for tumour and stroma - I end up with 3 annotations (the merged one, 1 for stroma and another for tumour);
  4. select the deprecated “Cell + membrane detection” (I am working with a membrane marker).

At this point, the 3 annotations disappear and I end up with only one annotation again, which totally ignores the pixel classifier.

Any idea what is going on?

Thanks for all the help!

Your cell detection is overwriting the annotations - it clears out the hierarchy of the parent annotation before running.

You might try running the cell detection first, then run the pixel classifier (make sure to select the annotation, not the cells) to create the annotations.

At that point you should have all of the objects, and just need to run the script resolveHierarchy()
Or Objects->Annotations->Resolve hierarchy if you want to use the menus.
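For reference, that whole order can be scripted. A sketch, assuming a saved pixel classifier named "TumourStroma" (the name is hypothetical) and a QuPath 0.2.x version where the createAnnotationsFromPixelClassifier() scripting method is available:

```groovy
// 1. Detect cells first, inside the merged square annotations.
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellMembraneDetection',
    '{}')  // paste your detection parameters (JSON) from the Workflow tab

// 2. Then create tumour/stroma annotations from the saved pixel classifier
//    ("TumourStroma" and the min-area / min-hole arguments are placeholders).
selectAnnotations()
createAnnotationsFromPixelClassifier('TumourStroma', 0.0, 0.0)

// 3. Finally, re-assign the detected cells to their new parent annotations.
resolveHierarchy()
```

This way the cell detection runs before the classifier annotations exist, so it cannot clear them out of the hierarchy.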
