Brightfield colocalization analysis in QuPath or Fiji

Hi,

How can I analyze colocalization in DAB- or H&E-stained sections (not fluorescently stained, please) in QuPath? Or Fiji?

Thanks

Remmy

Can you define more precisely what you mean by ‘colocalization’ (ideally with example images)?

Colocalization in this sense means a segmented object in one channel having a specified number of overlapping pixels with a segmented object in the other channel.
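
In code terms, what I mean is something like this minimal NumPy sketch (the label masks `labels_a` and `labels_b` are hypothetical placeholders for whatever segmentation ends up being used):

```python
import numpy as np

def colocalized_pairs(labels_a, labels_b, min_overlap_px=10):
    """Return (label_a, label_b, n_pixels) for every pair of objects that
    shares at least min_overlap_px overlapping pixels."""
    pairs = []
    both = (labels_a > 0) & (labels_b > 0)           # pixels covered by objects in both channels
    for a, b in set(zip(labels_a[both], labels_b[both])):
        n = np.count_nonzero((labels_a == a) & (labels_b == b))
        if n >= min_overlap_px:
            pairs.append((int(a), int(b), n))
    return pairs

# Toy example: two partially overlapping "objects"
a = np.zeros((20, 20), dtype=int); a[2:10, 2:10] = 1
b = np.zeros((20, 20), dtype=int); b[6:14, 6:14] = 1
print(colocalized_pairs(a, b, min_overlap_px=5))     # -> [(1, 1, 16)]
```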

I am attaching a brightfield image of three stains: blue, red, and brown.
[Attached image: Coloca-Blue, Red, Brown]

You can separate up to three stains in both QuPath and Fiji using color deconvolution, although typically the separation is not terribly clean. However, if it worked well enough, you could potentially threshold each stain and count overlapping pixels… but this is just area/overlap-based and ignores where the pixels occur relative to objects.
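
As a rough illustration of that pixel-overlap idea (written in Python/scikit-image rather than as QuPath or Fiji code, and assuming the default hematoxylin/eosin/DAB stain vectors of `rgb2hed`, which may not match your actual stains; the file name is hypothetical):

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

rgb = io.imread("brightfield.png")[..., :3]   # hypothetical file name; drop alpha if present
hed = rgb2hed(rgb)                            # deconvolved channels: hematoxylin, eosin, DAB

# Threshold each deconvolved stain channel (Otsu is just a starting point)
masks = [hed[..., c] > threshold_otsu(hed[..., c]) for c in range(3)]
hema, eosin, dab = masks

print("Hematoxylin & DAB overlap (px):", np.count_nonzero(hema & dab))
print("Eosin & DAB overlap (px):", np.count_nonzero(eosin & dab))
```

The Fiji equivalent would be something along the lines of the Colour Deconvolution plugin, followed by thresholding each output channel and combining the masks with the Image Calculator.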

If you require object segmentation and measuring object overlap then the task is considerably more complex and needs a much more precise definition of what exactly is required and what objects you’re talking about.

In any case, I don’t think there’s any built-in method that will do what you want in QuPath. It sounds like you require something specific, for which ImageJ/Fiji may be more appropriate. If you get it working in ImageJ and need to apply it to whole slide images then it might be possible to run your ImageJ plugin or macro through QuPath in the end.

If you are interested in overlap, I can see a few possible options.

  1. Detect the objects using one channel, then measure the amount of the second channel/stain within the objects detected in the first channel. This is primarily useful if one of your stains is in 100% of the objects (say, hematoxylin); a rough sketch of this approach follows the list.
  2. Use a variety of detection techniques to generate objects, then find the overlapping area between them. This will require some scripting, as can be seen here. That script specifically can’t be used on your image directly, but it should give you an idea of what can be accomplished.
  3. If I have time at some point, I might be able to adapt my colocalization or R^2 scripts to work for brightfield deconvolved channels. This will be very tricky for three stains in a brightfield image, unless you have taken the images with something better than an RGB camera (which only has 3 channels).
  4. Pixel classifier in QuPath 0.2.0. This is an experimental feature, is currently being changed, and cannot be scripted across many images. However, if you want to test something similar to the Weka pixel classifier, it might be an option to play with.
  5. As Pete says, use a macro that can be run through ImageJ on tiles in your image… though if your image is small (a single field of view versus a whole slide) it might be more efficient to do the whole thing in Fiji.
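
To make option 1 a bit more concrete, here is a hedged scikit-image sketch (not the QuPath workflow itself; the segmentation is deliberately crude and the DAB cutoff is arbitrary): segment objects from the hematoxylin channel, then measure the DAB signal inside each object.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

rgb = io.imread("brightfield.png")[..., :3]   # hypothetical file name
hed = rgb2hed(rgb)
hema, dab = hed[..., 0], hed[..., 2]

# Crude object segmentation on the hematoxylin channel: threshold + remove specks
nuclei_mask = remove_small_objects(hema > threshold_otsu(hema), min_size=50)
objects = label(nuclei_mask)

# Measure mean DAB inside each object and call it "positive" above a cutoff
DAB_CUTOFF = 0.05   # arbitrary value; tune on your own images
positive = sum(
    1 for region in regionprops(objects, intensity_image=dab)
    if region.mean_intensity > DAB_CUTOFF
)

print(f"{positive}/{objects.max()} hematoxylin objects also contain DAB above the cutoff")
```

In QuPath itself the analogous route would be cell detection on the hematoxylin channel followed by an intensity measurement or classifier on the DAB channel.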

Have you been able to segment your objects, and if so, how? What do they look like? The single image above is very small and I can’t really tell what it is showing, though I do think I see H&E and DAB.

One final option: if one stain is completely contained within objects defined by another stain, you might consider subcellular detections.

Cheers!

Thanks for the contribution, @Research_Associate.
I think I may explore option 1. But does this mean there would be no way of telling true colocalization apart from chance colocalization, i.e. nothing similar to Costes’ P value?

Thanks again.

I am pretty sure you need accurate intensity measurements for something like that, which rules out chromogenic stains. If you want that kind of colocalization information, use fluorescence. As I understand it, even with stain vectors and color deconvolution, your measurements are no longer quantitatively accurate where stains overlap.