Exporting tiles of rendered fluorescence images with QuPath

Dear experts,

I’m trying to (batch-) export tiles from multichannel fluorescence images (different proximity ligation assay dots, DAPI) in order to use these tiles to train a neural net to recognize the dots. The tiles should be rendered images, as I intend to train the network on false colours (e.g. red dots vs. green dots).
I am not familiar with Groovy. Fortunately, until now I could always find the right script on Pete’s excellent QuPath webpage or here in the forum, for which I am very thankful to the authors of the scripts.
This time I found one of Pete’s scripts which generates full-size rendered images (Script to export a rendered (RGB) image in QuPath v0.2.0 · GitHub). The script works fine with QuPath 0.2.3. However, I could not figure out how to change it to get small tiles (e.g. 256x256) of manually defined larger regions (e.g. rectangle annotations or tissue outlines).

Could anyone be so kind as to advise me on how to change the above-mentioned script? Or would it be better to take the full-size images and tile them with other software?

I don’t see a way to pass a rendered image server to the TileExporter, so one workaround for your rather specific use case might be converting the entire image into a rendered RGB image first (as an OME-TIFF if it is large), and then using the tile exporter on that image instead.

I am not sure whether the writeImage call at the end of your linked script needs to be replaced as well, but swapping it for the code below should ensure that even large images are written out.

import qupath.lib.images.writers.ome.OMEPyramidWriter

println 'Writing OME-TIFF'
// 'server' and 'path' come from the linked rendered-image script
new OMEPyramidWriter.Builder(server)
        .parallelize()              // Use multiple threads for writing
        .tileSize(512)              // Tile size within the OME-TIFF
        .scaledDownsampling(1, 4)   // Pyramid levels: full resolution, then downsampled by 4 each level
        .build()
        .writePyramid(path)

Then use the standard TileExporter on the new, rendered image, as shown here: Exporting images — QuPath 0.2.3 documentation
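
For reference, the tile-export part of that page looks roughly like this (a sketch only, run from QuPath's script editor on the rendered OME-TIFF after adding it to the project; tile size, extension and output folder are placeholders to adjust):

// Sketch of the standard tile export (relies on QuPath's default script imports)
def imageData = getCurrentImageData()

// Write tiles into a 'tiles' subfolder of the project, named after the image
def name = GeneralTools.getNameWithoutExtension(imageData.getServer().getMetadata().getName())
def pathOutput = buildFilePath(PROJECT_BASE_DIR, 'tiles', name)
mkdirs(pathOutput)

new TileExporter(imageData)
    .downsample(1)              // Export at full resolution
    .tileSize(256)              // Tile size, in pixels
    .imageExtension('.tif')     // '.tif'/'.ome.tif' keep all channels; '.png'/'.jpg' work for RGB
    .annotatedTilesOnly(true)   // Only export tiles that overlap annotations
    .overlap(0)                 // Overlap between neighbouring tiles, in pixels
    .writeTiles(pathOutput)

println 'Tile export done'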


I’m not sure that’s a good idea – the reason it’s harder to export rendered images in QuPath is that the appearance can be quite different depending upon the viewer settings (e.g. LUT colors, brightness/contrast settings). I think this would be an extra source of variation during the export phase.

I’d recommend exporting raw tiles and training on that instead. If you apply false colors, I think you should really do it later (e.g. in Python), using whatever approach you intend to apply when you finally deploy the network.


Thank you very much for the quick response!

Interestingly, my original image was an OME-TIFF already. I produced it from an MRXS file using the Glencoe Software converter; it was the only way to get the channels straight in QuPath, due to the nature of the MRXS format (I hate MRXS!). Tiling this image with the TileExporter resulted in “true” grey-scale tiles.

Your changes to the rendered-image script worked once I made sure the “import qupath.lib.images.writers.ome.OMEPyramidWriter” line was at the beginning of the script :) The script then indeed produced a very large OME-TIFF. Tiling this image resulted in false-colour tiles (yahoo!), but the resolution was low. I suppose I have to play with the “double downsample” (which was set to 10).


Hi Pete,
first, I want to thank you for the excellent QuPath software and for the time and effort you are spending in improving and developing it!

A very long text follows. I am sorry for this but I could not explain my decision to use false colours otherwise.

I also thought about the false colours as a potential error source. Unfortunately, my PLA dots (Duolink) are very small and very hard to distinguish on the raw tiles. The only easy way we found to distinguish between true signal and unspecific dots is that the true dots are red (in our case), while the “false” dots show an additional green background fluorescence. Hence the false dots appear orange on the false-colour tiles, while the true dots remain bright red. If I did the training on grey-scale images, I would need to train on the green-channel and red-channel grey-scale images separately, and then use another script (I would use R and the spatstat package) to “delete” the dots in the red channel that have dots at the same place in the corresponding green-channel image…

As I only wished to count the true dots (to compare them with the negative control, since Duolink produces some true but unspecific dots), I first went for subcellular detection and classifier training in QuPath. But this did not work, as a good portion of the dots lying between the detected cells weren’t detected (the dots indicate the PD-1/PD-L1 interaction between the cells). So I thought about using a YOLO network trained on both classes of dots (i.e. red and orange).

However, as my programming abilities are limited to R and YOLO configuration, I may be approaching my dot-counting problem like somebody whose only tool is a hammer…

BTW, running the TileExporter as it is, I only got DAPI grey-scale tiles.


Thanks @SeS :slight_smile:

I’ve not used YOLO myself, but I remain a bit skeptical… I think the brightness/contrast settings applied within the QuPath viewer will effectively incorporate background subtraction and thresholding in a very sneaky way that becomes impossible to control for later.

Often for an AI/deep learning-based approach you’d convert your image to 32-bit floating point anyway and normalize at that point (by subtracting the mean & dividing by the standard deviation, or by normalizing using the min/max to the range 0–1, for example). But that could be hard to get working well if your images are large and your spots are small.
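
For example, the two usual options look something like this (just a toy Groovy sketch on a plain list of values; in practice you would do it in whatever framework you train with):

// Toy example of the two common normalization options, on a plain list of pixel values
def pixels = [10.0, 52.0, 180.0, 33.0]   // placeholder values

// Option 1: zero mean, unit standard deviation
double mean = pixels.sum() / pixels.size()
double std = Math.sqrt(pixels.collect { (it - mean) * (it - mean) }.sum() / pixels.size())
def standardized = pixels.collect { (it - mean) / std }

// Option 2: rescale with min/max to the range 0-1
double minVal = pixels.min()
double maxVal = pixels.max()
def rescaled = pixels.collect { (it - minVal) / (maxVal - minVal) }

println standardized
println rescaled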

My feeling is that this is a job best left to conventional image processing – using the kinds of methods I described in my old ImageJ handbook, written pre-QuPath.

For spot-counting, there are some good established approaches: mostly using a difference of Gaussians or Laplacian of Gaussian filter (for all practical purposes, almost the same thing), or a determinant of Hessian filter (much more complicated). However, in both cases figuring out a suitable threshold method can be a real pain, given that images are almost always highly variable.
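
Just to illustrate the difference-of-Gaussians idea, here is a rough Groovy sketch using ImageJ classes (runnable e.g. from Fiji's script editor; the sigma values are placeholders to tune to your spot size):

// Rough difference-of-Gaussians (DoG) sketch using ImageJ classes
import ij.IJ
import ij.ImagePlus
import ij.process.Blitter
import ij.process.FloatProcessor

def imp = IJ.getImage()   // the currently open image, e.g. your red channel

// Work in 32-bit so the subtraction can go negative without clipping
FloatProcessor fpSmall = imp.getProcessor().convertToFloatProcessor()
FloatProcessor fpLarge = (FloatProcessor) fpSmall.duplicate()

fpSmall.blurGaussian(1.0)   // sigma roughly matching the spot radius
fpLarge.blurGaussian(2.0)   // larger sigma to estimate local background

// DoG = (small blur) - (large blur); bright spots become local maxima
fpSmall.copyBits(fpLarge, 0, 0, Blitter.SUBTRACT)
new ImagePlus('DoG', fpSmall).show()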

If you use ImageJ/Fiji, the Process → Find Maxima… command can be wonderful. If you use R, then EBImage may help (although I’ve not used it myself, as I am terrible at R).
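
Find Maxima can also be called from a script, e.g. on the DoG-filtered image (again just a sketch; the tolerance value is a placeholder):

// Sketch: run ImageJ's Find Maxima programmatically on a filtered image
import ij.IJ
import ij.plugin.filter.MaximumFinder

def ip = IJ.getImage().getProcessor()
double tolerance = 10.0   // placeholder 'prominence'/'noise tolerance' - adjust to your data
def maxima = new MaximumFinder().getMaxima(ip, tolerance, true)   // true = exclude edge maxima
println "Found ${maxima.npoints} candidate spots"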


Thanks Pete,

As I am new to advanced image analysis, I still have to figure out how to use tools that seem to be common knowledge for an expert :) I will first try to use Fiji or EBImage to count my red dots and subtract those which have an additional green-channel signal, before stepping onto unsafe terrain with false colours. Maybe I will be able to adapt the method described here: https://www.researchgate.net/post/How-to-count-cells-that-are-both-GFP-and-dapi-positive

If I end up using raw tiles in ImageJ to classify and count my dots, I could choose “Analyze” → “Tiles & s…” → “Create tiles” and then proceed in ImageJ without saving the tiles as separate images. But if I decided to use EBImage, I would have to save all the tiles separately. Is there a way to get the TileExporter to produce tiles of all channels? Right now it produces tiles of only one channel, which happens to be DAPI in my case…

Exactly what script are you using? The TileExporter should be capable of exporting multiple channels.


Sorry Pete, my fault. I have now loaded a tile image into QuPath and it does indeed have all 3 of my channels. I thought the TileExporter exported simple (1-channel) images; that is why I was surprised at not being able to set the format to anything other than ‘.tif’/’.ome.tif’ when exporting tiles from fluorescence images, while I could use all formats for tiling “normal” IHC scans. It’s because only TIFF supports multichannel! (So I’m learning something at least…)

I’m using this script: Exporting images — QuPath 0.2.3 documentation


Wincing You might be able to do this using the Subcellular detection feature in QuPath (or the ImageJ Macro runner running across QuPath tiles). Once you have the ROIs in QuPath, you can Add Measurements to include the mean red and green intensities. Then, I am assuming, there is some red/green ratio over which you would consider the point “real” vs “not real.”
Remove all of the not-real spots… and analyze.
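
The removal step could look roughly like this (a sketch only: the measurement names and the ratio cut-off are assumptions, so check the measurements your detections actually have and pick a threshold that fits your data):

// Sketch: remove detections whose red/green ratio is too low to count as "real"
// NOTE: the measurement names and the 2.0 cut-off are assumptions - check the
// Measurements table for your actual names and a sensible threshold
def notReal = getDetectionObjects().findAll { d ->
    double red = measurement(d, 'Red: Mean')       // hypothetical measurement name
    double green = measurement(d, 'Green: Mean')   // hypothetical measurement name
    green > 0 && red / green < 2.0                 // low red/green ratio -> likely "not real"
}
removeObjects(notReal, true)
fireHierarchyUpdate()
println "Removed ${notReal.size()} 'not real' spots"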

And thanks @petebankhead, I was a little bit leery about the whole approach, but was curious about the technical aspect. Also figured the deep learning model could figure out the red vs red+green… though yeah, that should be easy enough through thresholding as well.


I don’t want to claim to be an expert, but for what it’s worth I personally find pretty much everything in bioimage analysis difficult – regardless of whether there are established techniques that seem like they should do the job easily :slight_smile:

(And a lot of it comes down to thresholds failing on images I haven’t looked at yet…)


I tried the subcellular detection, but a great proportion of the dots weren’t detected, as they lay exactly between the cell outlines produced by QuPath, regardless of the settings I tried to change to include them in the “cytoplasm”.
I came to using YOLO for cell detection while trying to detect and classify cells on images of IHC (DAB) stainings. These originated from routine diagnostics and had all sorts of artefacts, which made the normal cell detection/classification unreliable both in QuPath and in a commercial software package available at our institute. Even so, QuPath did the job better than the expensive commercial counterpart!
I did not convert the images in the way Pete described above, more due to my naivety than by intention. However, YOLO did surprisingly well after training on only about 300 images with 20-30 cells each (80-90% mean average precision), but will (supposedly) need a few thousand images to reach more than 90%. I used the “first success” model to detect cells on those additional images, converted the detection bounding boxes into annotations, and now have to correct the errors of the model, which is much less work than annotating de novo.
As I’m still doing this tedious job, I thought that additionally annotating some 300 IF stainings won’t do me much more harm. It might really be that the variance due to false colours introduced in the viewer won’t negatively affect detection by YOLO, as the dots will remain either some sort of red or red-green, which is a quite obvious difference. However, if some of the dots end up in between, I risk having to re-check those on the raw images, which would take too much time…
Nevertheless, I sincerely thank you for the suggestion on how to change the rendered-image export script, as I might need it in the future.
I will try to figure out how to use the red channel in EBImage to create a mask of the red dots of the desired shape and size, and then use it to check whether intensity above some threshold is present in the green channel, which would indicate “false” dots.


You increased the Cell Expansion so that all of the cells are pressed up against each other?
Otherwise, if you do not need the cell segmentation, you can take very large areas, convert them to a “cell” and run the detection within that. But that gets scripty.

The simple thresholder might also be an option if your spots are sparse. You get the same sorts of outlines, and can add measurements in similar ways.

Anyway, just other options.

From a … “have experienced difficulties using thresholds like this” kind of perspective, I would strongly recommend using the ratio of the intensities rather than the green intensity itself. Spots that are more in or out of focus will have brighter or dimmer green and red to approximately the same degree.

If the red/green overlap is actually bleedthrough or some imaging related artifact, you might also look into linear unmixing.


Yes, but it did not help.

I’ll definitely give it a try, although I have rather a lot of dots and even more background/“false dots”.

Thank you for the advice! I will see how to implement this in EBImage.

I don’t think I have bleed-through, as my green channel detects only autofluorescence (there is no green fluorophore). It is rather a Duolink chemistry-related artefact.
