QuPath converting RGB to CMY


I have a triple staining with chromogens in Cyan, Magenta and Yellow. To start separating these stains, I tried to create three new channels (cyan, magenta and yellow) from the RGB image using stain vectors. (Maybe this is already not the right way to do this?) I would expect to use: Yellow = [0.5 0.5 0] ; Cyan = [0 0.5 0.5] ; Magenta = [0.5 0 0.5]

setColorDeconvolutionStains('{"Name" : "CMY", "Stain 1" : "Yellow", "Values 1" : "0.5 0.5 0 ", "Stain 2" : "Cyan", "Values 2" : "0 0.5 0.5",  "Background" : "255 255 255 "}');

QuPath normalizes the 0.5 values to 0.707 so that each vector has unit length, but I doubt this is the correct way, as the residual (which should be Magenta) becomes [0.577 -0.577 0.577].
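For reference, both numbers can be reproduced with a quick sketch (pure Python; this assumes the residual is computed as the normalized cross product of the two unit stain vectors, which matches the values reported above):

```python
import math

def normalize(v):
    # scale a 3-vector to unit length
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    # cross product of two 3-vectors
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

yellow = normalize([0.5, 0.5, 0])          # ~ [0.707, 0.707, 0]
cyan = normalize([0, 0.5, 0.5])            # ~ [0, 0.707, 0.707]
residual = normalize(cross(yellow, cyan))  # ~ [0.577, -0.577, 0.577]
```

So the [0.577 -0.577 0.577] residual is simply the unit vector orthogonal to the two supplied stains, not an estimate of the magenta stain itself.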

It would be nice to have the channels display in their own color as well. After the above command, yellow displays as blue, cyan as red, and the residual as green.

Coming from fluorescence microscopy, QuPath and digital pathology are very new to me. Thanks in advance for the help!


I’d recommend trying to set the stain vectors by example, at least at first: https://qupath.readthedocs.io/en/latest/docs/tutorials/separating_stains.html#brightfield-images

Since color deconvolution effectively involves a kind of image inversion along the way (as well as logarithms), the stain vectors themselves may not be terribly intuitive.

QuPath will try to provide some vaguely sensible color mapping for each stain that will be separated (albeit really just for visualization)… so if the colors you see are far from the colors that are present in the image, the vectors are probably wrong.

(There’s a bit of debate around how the residual could/should be determined… the color displayed for it is probably not very useful at all.)

Yep, for reference, in optical-density terms cyan is more of a [1, 0, 0] and yellow is [0, 0, 1].
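The apparent inversion comes from stain vectors living in optical-density space, where OD = -log10(I / I0): cyan absorbs red light, so its OD vector points along the red axis. A small sketch (pure Python, with hypothetical near-pure pixel values, since a fully saturated channel would give log(0)):

```python
import math

def rgb_to_od(rgb, background=255.0):
    # optical density per channel: OD = -log10(I / I0),
    # clamping intensities to at least 1 to avoid log(0)
    return [-math.log10(max(v, 1) / background) for v in rgb]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

cyan_od = normalize(rgb_to_od([10, 255, 255]))    # ~ [1, 0, 0]
yellow_od = normalize(rgb_to_od([255, 255, 10]))  # ~ [0, 0, 1]
```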

Thank you very much. I read the 2001 Ruifrok paper and now better understand why there is a difference between the average value of a single stain in an image and the final vector that separates that stain from the image.

After some trials, @bramvdbroek and I found a way to separate the image into four channels in ImageJ. We perform two different color deconvolutions, A and B.

We obtain the stain vectors by selecting cells that have only one color. In A we use:

  1. Hematoxylin
  2. Cyan
  3. Yellow

In B we only change Cyan to Magenta:

  1. Hematoxylin
  2. Magenta
  3. Yellow

As a result, the Hematoxylin channel gets either Cyan or Magenta as extra bleed-through. But since Cyan and Magenta don’t overlap, we can separate them and obtain a Hematoxylin-only signal. But how would you do this in QuPath?

Assuming that those are not stains your cell detection depends upon, you can add some of those measurements later using Add intensity features.
For each image, apply one color transform, add features, then apply the second transform and add more color features. I did this often enough with 3-5 plex brightfield stains that I automated the process to generate cytoplasmic mean ODs from a set of stain vectors. Cytoplasmic mean OD is the one thing Add intensity features will not give you by default.

In case you are asking about the process, you can use Estimate stain vectors to set two of the channels and leave the third one free (probably the most accurate), or you can switch your image type to Brightfield (other) in order to set all three channels. I would normally use Estimate stain vectors to figure out what the four-plus stain vectors were, and then manually enter them as Brightfield (other) stain vectors.

Unfortunately, the HE stain is the one that our cell detection depends upon. We have found a few ways to separate the Hematoxylin from the rest of the image:

  1. Subtraction of the difference between channels 1A and 1B. (here)
  2. The average of two bleed-through corrections, using channels 1A - 2B and 2A - 1B. (here)
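Since cyan and magenta never coincide spatially, both corrections can be sketched pixel-wise (pure Python on flat pixel lists; the names ch1A, ch2B etc. are hypothetical stand-ins for the deconvolved channels, and treating method 1 as a pixel-wise minimum is just one reading of the description above):

```python
def htx_via_min(ch1A, ch1B):
    # where magenta bleeds into 1A, 1B is clean (and vice versa),
    # so the pixel-wise minimum keeps only the shared hematoxylin signal
    return [min(a, b) for a, b in zip(ch1A, ch1B)]

def htx_via_bleed_correction(ch1A, ch1B, ch2A, ch2B):
    # subtract each run's unmodelled stain channel, then average
    corr_a = [h - m for h, m in zip(ch1A, ch2B)]  # remove magenta bleed from 1A
    corr_b = [h - c for h, c in zip(ch1B, ch2A)]  # remove cyan bleed from 1B
    return [(x + y) / 2 for x, y in zip(corr_a, corr_b)]
```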

Here is a small part of one of the images; the entire image is 223.4 GB, and there are about 20 images. So QuPath is ideal for working with these large datasets.

The problem is that I don’t see how to do either of these two analyses in QuPath. But maybe that is not even the best way to go. I could automatically send small sections of the image to ImageJ and get the image with the hematoxylin intensity from ImageJ back to QuPath. But I only see how to get ROIs back to QuPath, not a new image.
So I started running StarDist in ImageJ with the intention of exporting the ROIs, but it turns out the built-in ImageJ of QuPath cannot run StarDist.

What would be the best way to do this?

If StarDist is working for you, have you tried building QuPath to run StarDist?


I have, and normally I use StarDist in QuPath for my segmentation. But I cannot get the separated hematoxylin intensity back to QuPath from ImageJ.

Ah, sounds like a more complicated process than I was thinking. I thought you were simply looking for a normal hematoxylin channel with deconvolution, but more complicated steps will require a more complicated script.

If you cannot use ImageOps to recreate the same sort of HTX channel within QuPath (look at some of the image-writing options), you may want to write out tiles in ImageJ with the deconvolution applied, and then stitch those together into a whole slide image (possibly using QuPath). Maybe @petebankhead will have a better idea. Would it be possible to run QuPath’s StarDist on an image that was sent to QuPath’s ImageJ and manipulated there, followed by copying those ROIs back into the whole slide image in QuPath?

@rharkes there isn’t really a way to send the pixels back to QuPath; there would be nowhere to store them.

The main ideas of QuPath are outlined here, and are quite different from ImageJ’s. Basically, pixels are always just held temporarily – once you get pixels, you really need to follow any processing steps through to their conclusion (i.e. generate objects, make measurements).

Potentially you could train your own StarDist model specifically to work on the RGB data of your images. It would be quite a bit of work outside of QuPath… but at least then it is adapted to specifically your kind of images.

Alternatively, you can apply QuPath to a color-deconvolved image (although just using one set of stain vectors, without the extra steps you describe – which would be a lot more complicated to include). See

The last option is that you could export your image tiles from QuPath as TIFF images, and then run StarDist through Fiji. This is more awkward in a workflow, but gives more freedom to explore things to see if they work.
