How to normalise or subtract background in QuPath with DAB-only stained sections?

EDIT: I have some DAB-only stained slides that have different background intensities on the tissue. My goal is to quantify the number of DAB-positive pixels.

However, I cannot apply the same threshold to all my images for batch analysis as they have different background intensities.

For example, section A would have slightly lighter DAB staining and therefore a lighter background, while section B would have a darker stain and background. So if I apply section A's threshold to section B, section B would be considered 100% positive (background included), simply because its background is darker than section A's.

Is there a way to subtract the background so I can normalise all the images? I know in ImageJ you can convert images to 16-bit grayscale and then subtract background, but I can’t seem to find anywhere in QuPath that allows you to convert images to 16-bit as well as subtract background (using the rolling ball). Or is there another way in QuPath to do this?
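For anyone unfamiliar with how the rolling-ball method works: it can be approximated by a grayscale morphological opening (erosion followed by dilation), which estimates the slowly varying background so it can be subtracted. Below is a minimal numpy sketch of that idea, purely for illustration (it is neither QuPath nor ImageJ code, the window size is made up, and a flat square window stands in for the ball; for brightfield DAB you would invert or use absorbance first, since the signal is dark on a bright background):

```python
import numpy as np

def local_min(img, r):
    """Grayscale erosion: minimum over a flat (2r+1) x (2r+1) window."""
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.full((h, w), np.inf)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def local_max(img, r):
    """Grayscale dilation: maximum over a flat (2r+1) x (2r+1) window."""
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.full((h, w), -np.inf)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def subtract_background(img, r):
    """Opening (erosion then dilation) keeps structures wider than the
    window as 'background'; subtracting it removes offsets that differ
    between images while preserving small bright features."""
    return img - local_max(local_min(img, r), r)

# Demo: two different background levels plus one small bright feature
img = np.full((30, 30), 50.0)
img[:, 15:] += 20.0          # darker background on one half
img[10, 10] += 100.0         # small "stained" spot
corrected = subtract_background(img, r=3)
```

After subtraction, both background levels sit at zero and only the small feature remains, which is exactly why a single threshold then works across differently tinted images.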

1 Like

Clarification: is the background on the whole slide, the tissue, some aspect of the staining, something else?

Background as a general darkening of the image can be adjusted for, to some extent, by the Background setting in the Image tab (auto-calculated when you run Preprocessing → Estimate stain vectors).

Tissue staining is more difficult since it is generally inconsistent and might require something more like ImageJ background subtraction.


The background is on the tissue, and like you said it’s inconsistent throughout the tissue on the same sample as well as across different samples. I tried doing Preprocessing → Estimate stain vectors on one slide but when I applied the same settings to another sample, it just didn’t work, because it had different background noise.

Yeah, I was hoping QuPath would have the background subtraction feature that ImageJ does, because I cannot open my images in ImageJ as they are .svs files, and I tried downloading SlideJ but that didn’t seem to work either.

Hi @kitcat, QuPath is quite different from ImageJ in some fundamental ways – the most important one here is that you can’t change the pixel values in QuPath (e.g. with background subtraction). This allows it to work with much larger images efficiently, since it can always return to the file to request the pixels if it needs to.

However, background subtraction can be wrapped up into other commands that operate on part of an image at a time. Cell detection offers this.

For thresholding, there is a ‘Prefilter’ option; the Laplacian of Gaussian selection will effectively smooth the image and give it a mean of zero. There’s more info on how it works here. It’s most likely to be useful if the ‘true’ structures are all similarly sized, and either blob-like or at least quite thin.
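To illustrate why a zero-mean prefilter helps with inconsistent background: a Laplacian-style kernel has coefficients that sum to zero, so any constant offset in the background cancels out of the filtered image, and the same threshold applies regardless of how dark the background is. A rough numpy sketch (a plain 3x3 Laplacian standing in for the Laplacian of Gaussian; this is not QuPath's actual implementation):

```python
import numpy as np

# 3x3 Laplacian kernel; its coefficients sum to zero, so any constant
# (background) offset added to the image cancels out of the response.
lap = np.array([[0.,  1., 0.],
                [1., -4., 1.],
                [0.,  1., 0.]])

def convolve2d(img, k):
    """Naive 'valid' 2D convolution, sufficient for this illustration."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (16, 16))
r1 = convolve2d(img, lap)
r2 = convolve2d(img + 40.0, lap)   # same content on a darker "background"
```

The two responses are identical, so a threshold chosen on one image carries over to the other despite the offset.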

Alternatively, you can use pixel classification and select multiple features – trying to train QuPath to effectively ignore the background.

I can’t be sure either of these will give the results you need, but they are the options I would try.


Hi Pete,

Thanks a lot for that, I’ll give your recommendations a try!


Correct, you would be using this per slide to adjust the background.


Just a question and a few comments:

Why is a stain vector important here?
It is just a single stain. Color deconvolution doesn’t make sense for a single stain.

I have compared the color vector from image A and B in ImageJ:

  • Convert to absorbance aR|aG|aB
  • Divide, i.e. aG/aR and aB/aR

The means of these absorbance ratio images show the relation between aR, aG and aB; that is what a stain vector describes.
The mean can be calculated in different ROIs.

All measurements show that the stain vectors for A and B are very close to equal.
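That comparison can be sketched numerically. Under Beer-Lambert, a pixel stained with a single stain at concentration c has absorbance c times the stain vector, so the ratios aG/aR and aB/aR are independent of how strong the staining is; that is why images A and B give (almost) the same stain vector despite their different intensities. A small numpy illustration with a made-up DAB-like vector (the numbers are hypothetical, not measured from these slides):

```python
import numpy as np

# Hypothetical DAB-like stain vector, unit-normalised; real values
# would be estimated from the slides themselves.
stain = np.array([0.65, 0.70, 0.29])
stain = stain / np.linalg.norm(stain)

def absorbance(rgb):
    """Beer-Lambert absorbance per channel, assuming white background I0 = 255."""
    return -np.log10(np.clip(rgb, 1.0, 255.0) / 255.0)

def transmitted(conc):
    """RGB intensities produced by the stain at a given concentration."""
    return 255.0 * 10.0 ** (-conc * stain)

a_weak = absorbance(transmitted(0.3))    # lightly stained pixel (like image A)
a_strong = absorbance(transmitted(1.2))  # darkly stained pixel (like image B)

# aG/aR and aB/aR depend only on the stain vector, not the concentration
ratios_weak = a_weak[1:] / a_weak[0]
ratios_strong = a_strong[1:] / a_strong[0]
```

Both pixels give the same ratios even though one is four times as strongly stained, matching the observation that A and B share a stain vector.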

The approach to use different stain vectors and apply color deconvolution to perform a kind of ‘background’ subtraction or normalization is questionable.

By the way …

  1. In this transmitted-light image the background is bright and the signal is coloured and darker than the ‘white’ background. Images A and B are not perfectly white balanced, but they are very similar: the white (transmitted light) background is similar in A and B.
    The strong staining in image B is not what is usually called ‘background’. It is more a change of staining intensity, which can have various causes.

  2. And … have a look onto image aG_divided_by_aR (of image B):


It shows that image B contains some ‘artefacts’ of unknown source.
It would be interesting to know how this image was created.

  3. The absorbance ratio images should ideally (assuming Bouguer-Lambert-Beer and a linear relation between absorbance and concentration) be structureless, with only a constant value.
    This shows that the linear assumption is not valid. This can also be seen if the aR|aG|aB absorbance values are displayed in a 3D viewer.

I think @petebankhead's suggestion above is something that can help here.

I find this post interesting. Would it be possible to have access to the original images?


The image wasn’t posted when I responded; I had assumed that the DAB was actually DAB+HTX. Also, that had nothing to do with the stain vectors themselves, but with the Background setting that is bundled in, which can be useful if the entire slide is darker.


Sorry @Research_Associate that was my fault for not attaching the image at first!

Okay, thanks! Do you know if there is a way to batch process this, or do I need to do this manually for each image since I can’t use the same settings? I’ve been using the “Auto” function whenever I run “Estimate stain vectors”, and the script doesn’t record anything about clicking Auto.

I’m also curious: as @phaub said, does colour deconvolution still work if it’s a single stain? When I looked at the channels after colour deconvolution, some of the DAB staining appeared in the Haematoxylin channel.

Sorry I’m still new to this, how do you do that?

Sure, let me know if this link works or not


Having looked at the images, I don’t think the background correction really applies. That is a per image kind of thing, like if the entire slide was darker, or the exposure time was shorter so that the image appeared dimmer.
There are ways to automate it, sort of, but they are involved, and would not solve your staining issue.

With only a single stain, the deconvolution will not accomplish much, even though QuPath will show the stain separation. Almost anything dark enough will show up in both channels - if you think about it, once you have black, you can’t tell what the stains were that contributed to black. Anything close enough to black suffers from the same problem.
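That point about black can be made concrete: stain unmixing is just expressing a pixel's absorbance as a combination of the stain vectors, and a neutral near-black pixel, which carries no colour information, gets a large positive contribution from both stains. A toy least-squares version (the stain vectors here are illustrative placeholders, not QuPath's defaults):

```python
import numpy as np

# Illustrative (not QuPath's exact) stain vectors, unit-normalised
h_stain = np.array([0.65, 0.70, 0.29]); h_stain /= np.linalg.norm(h_stain)
dab     = np.array([0.27, 0.57, 0.78]); dab     /= np.linalg.norm(dab)
M = np.stack([h_stain, dab], axis=1)    # 3x2 stain matrix

# A near-black pixel: high, roughly equal absorbance in all channels
near_black = -np.log10(np.array([5.0, 5.0, 5.0]) / 255.0)

# Least-squares unmixing: how much of each stain "explains" the pixel
coeffs, *_ = np.linalg.lstsq(M, near_black, rcond=None)
```

Both coefficients come out strongly positive, i.e. the dark pixel shows up in both deconvolved channels, which is exactly the ambiguity described above.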


Yes. Perfect. Thank you.
But these are not the originals, right?
The originals were scanned with a commercial scanner and then opened and rotated with QuPath.
I’m writing this because I wonder where the strange artifacts come from:

These checkerboard patterns can be found all over both images and are disturbing. In the following I have downsampled the images to half of their original size to reduce the negative effects of this pattern.

Since it is unclear to me what exactly you want to measure, I would like to use ImageJ/Fiji first. Let’s do the QuPath processing in the next step.
I would also like to skip the absorbance, stain vector, colour deconvolution etc. for now. I assume that there is a simpler solution. Let’s see …

After converting your original images A and B to RGB stacks, it can be seen that the strongest effect of the different staining intensity is visible in the green and blue channels. (This could be a result of the characteristics of the DAB absorbance.)
The red channels of both images are more similar than the green or blue channels.
Therefore I start with a first approach: use the red channel, reduce the image size to remove the pattern, apply a local threshold and overlay the thresholded image onto the red channel.
The result looks like this:

If this looks interesting to you, then apply the following macro to your original image (your link) in Fiji!
(It is important to use Fiji, not plain ImageJ, because Fiji has the built-in Auto Local Threshold functions.)

// Macro : DetectPositiveCells.ijm

title = getTitle();

// Split into channels and keep only the red channel
// (the one least affected by the staining-intensity difference)
run("RGB Stack");
setSlice(3);
run("Delete Slice"); // remove blue
setSlice(2);
run("Delete Slice"); // remove green

// Downsample to half size to suppress the checkerboard pattern
w05 = getWidth()/2;
h05 = getHeight()/2;
run("Size...", "width=" + w05 + " height=" + h05 + " depth=1 constrain average interpolation=Bicubic");

run("Duplicate...", "title=threshold");

// For more info regarding 'Auto Local Threshold' see the Fiji documentation
//run("Auto Local Threshold", "method=Sauvola radius=30 parameter_1=0 parameter_2=0");
run("Auto Local Threshold", "method=Phansalkar radius=30 parameter_1=0 parameter_2=0.6");

// Overlay the binary result onto the red channel
selectWindow(title);
run("Copy");
selectWindow("threshold");
run("Add Slice");
setSlice(2);
run("Paste");
run("Select None");
run("Make Composite", "display=Composite");

showMessage("Macro", "Done");

You can use/test different local threshold methods and vary the parameter settings.
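For anyone curious how such a local threshold works outside Fiji: Sauvola (and the related Phansalkar method) thresholds each pixel against T = m * (1 + k*(s/R - 1)), where m and s are the mean and standard deviation in a local window. Because m tracks the local background, darker-stained regions automatically get a lower threshold. A simple numpy sketch of Sauvola (parameters made up; Phansalkar additionally adds an exponential term for low-contrast images):

```python
import numpy as np

def sauvola_threshold(img, radius=15, k=0.2, R=128.0):
    """Sauvola local threshold: T = m * (1 + k*(s/R - 1)), with m and s
    the mean and standard deviation in a (2*radius+1)^2 window.
    Returns a mask of pixels darker than their local threshold."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    win = 2 * radius + 1
    # accumulate sum and sum of squares over all window offsets
    s1 = np.zeros((h, w))
    s2 = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            v = pad[dy:dy + h, dx:dx + w]
            s1 += v
            s2 += v * v
    n = win * win
    m = s1 / n
    sd = np.sqrt(np.maximum(s2 / n - m * m, 0.0))
    T = m * (1.0 + k * (sd / R - 1.0))
    return img < T   # DAB-positive pixels are darker than the local threshold

# Demo: two halves with different "stain intensity", one dark structure each
img = np.full((40, 40), 200.0)
img[:, 20:] -= 60.0          # darker-stained half, like section B
img[10:13, 10:13] = 50.0     # dark structure in the light half
img[30:33, 30:33] = 20.0     # dark structure in the dark half
mask = sauvola_threshold(img, radius=8, k=0.2, R=128.0)
```

Both structures are detected while the two background levels are not, which is the behaviour that makes local thresholding attractive for these slides.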

Please let us know if this gives meaningful results and if these are the objects you want to quantify.


Yes, that’s correct, I scanned on a commercial slide scanner, rotated it, sent a section of the slide to ImageJ and saved the images as TIFF files. Sorry, I didn’t realise it would create an artifact!
I have now added the 2 original images to the same folder/link, but those files don’t open in Fiji for me.

Oh yes, thank you, that is great! The star-shaped white structures (astrocytes) are the structures I want to quantify.


Well, right off the bat, those same structures are apparently in the original SVS image. Which is… interesting. Do you know what compression was used? I am very accustomed to seeing JPEG compression but this is… new. JPEG2000 maybe? JPEGXR? It even shows up in the white background, so pretty sure it is some kind of compression artifact.

This is what a fairly quick pixel classifier was able to pick up. I am sure it could be improved.

Weka could probably achieve similar or better results on smaller subsets of the images, or you might be able to call @phaub’s script above on exported regions. Given how finely detailed the objects you are looking for are (and close to the resolution limit of your image), I would not recommend rotating the image. Even the compression is making the results sub-optimal.
Or, maybe the script could be run directly through the QuPath ImageJ macro runner, not sure.

As objects, no size thresholding, lots of fragmentation.

Just messing around. End of the day.

If you can get a threshold-based script to work, I would generally trust that far more. Classifiers can have surprising results in areas that are not like ones where you trained them. Still, it might be an option if other methods end up being problematic.


That is weird and annoying; unfortunately, I don’t know what compression was used, sorry.

Were the artifacts worse when I cropped the images or the same?

My biggest concern is that the thresholding properties of each image would vary due to the different stain intensities of the background tissue.

… and the artifacts interfere with the digital evaluation (at least at the highest resolution).
It would be interesting to know what causes these artifacts and whether there is a way to avoid them (e.g. by deactivating some preprocessing in the scanner).

I think so too.

That is the reason why I have tried to use a local threshold method. I’m unsure whether this is really the ‘final’ answer, but I thought it was worth a try.
Maybe you can apply the above ImageJ macro to your images A and B and compare the structures segmented by local thresholding. Do you think the local threshold delivers acceptable and comparable segmentation in both images despite the different stain intensities?

In the meantime I’m experimenting with a QuPath script which adds and displays some constructed channels; in your case it adds an absorbance channel and a channel showing the result of local thresholding.

Here are some screenshots as a first impression:

Both viewers display the same slide (synchronized).
The upper viewer displays the Red_Absorbance (in pseudo_Brown color) plus the local threshold (method similar to the IJ macro, in magenta color).
The lower viewer shows the original image.

The solution is not finished yet.
So far the constructed channels can be displayed and/or saved; it is not yet possible to work interactively with these channels before saving them into a new image.
The script makes intensive use of @haesleinhuepf 's cluPath extension, so the machine running it has to be ‘clij-capable’.

more to come …

Most important: @kitcat, please apply the IJ macro to some test images and check whether the local threshold approach provides acceptable results.


Not sure, I did not keep the old images. The artifacts were very frequent in the SVS files, so I do not think they were worse in the .tif files.


It is definitely something you should look for.


I’ll try to see if anyone else knows in the facility!

Wow, thanks, these are really good; from what I can see this outlines exactly what I want. If there were a way to do the batch analysis with QuPath, that would be much better than cropping and exporting each image to ImageJ, especially since I cannot analyse the whole section with ImageJ.

The above ImageJ macro is great, it did a really good job detecting only the structures, regardless of the different stain intensities!

Hi @kitcat, the script linked above will probably help you to ‘solve’ the pattern issue.


Oh wonderful, thank you so much for doing this @phaub, as well as @Research_Associate and @petebankhead for helping! :slightly_smiling_face:
