Issues with exporting masks from QuPath

I am trying to apply a U-Net multi-class segmentation procedure to whole-slide brightfield histopathology images that were annotated in QuPath with several tissue region categories, such as tumor, normal, and stroma.

See an example of multi-class annotations (showing only a small subset of a slide):

I need to convert the QuPath annotations into a single whole-slide binary multi-channel PNG image, where each tissue category's mask (covering all regions of that category in the slide) is encoded in the corresponding channel, plus a background channel marking all non-annotated pixels. Alternatively, a Python NumPy array encoding the same annotations would work, since these two formats are interchangeable; an XML export of the annotations would also be acceptable.
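To make the target format concrete, here is a minimal NumPy sketch (a hypothetical 3-class toy example, not QuPath output) of converting a per-pixel label map into the desired one-hot multi-channel mask with an extra background channel:

```python
import numpy as np

# Hypothetical label map: 0 = unannotated, 1..3 = tissue classes
labels = np.array([[0, 1, 1],
                   [2, 2, 3],
                   [0, 0, 3]])

n_classes = 3
# Channels 0..2: one binary mask per class; channel 3: background
masks = np.zeros((n_classes + 1,) + labels.shape, dtype=np.uint8)
for c in range(1, n_classes + 1):
    masks[c - 1] = np.where(labels == c, 255, 0)
masks[n_classes] = np.where(labels == 0, 255, 0)  # non-annotated pixels

print(masks.shape)  # (4, 3, 3): channels x height x width
```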

The problem is that Python cannot access QuPath project files directly, and the QuPath Groovy scripts I have found focus on tiles or individual ROIs rather than the whole image.

Is there an existing script that performs this task, or can you advise how to write one?


The reason the Groovy scripts focus on tiles is that a whole slide image is typically just far too big to export as a PNG – at least at full resolution.

This page describes how to export an ImageServer in different ways, at different resolutions:

The exact same scripting approaches can be applied to export the full image for the LabeledImageServer created in the exporting annotations section.

But if your image is too large, the export will either fail (because of memory/array length issues) or be totally impractical (because a PNG is not a tiled, pyramidal image).

The only export format that should work in general for whole slide images through QuPath is currently .ome.tif, because it supports writing tiled, pyramidal images.

XML is a bit too vague… the exporting annotations page explains why it is not supported. But GeoJSON is well-defined, and you can export to that.

Thanks, but I don’t think resolution is the problem – the output would just be a binary image. In any case, I could use a lower-resolution representation if resolution is the only issue. Can you clarify how the tile-export script from the link you provided can be used to export the whole image at an intermediate resolution? Or how can I export the .ome.tif you mentioned?


I’ve updated the documentation now to provide an example of this:

Thank you, that works well for an RGB downsampled image encoding the regions with different colors. However, I get the following error when I set .multichannelOutput(true) to get a multi-channel image (one channel per annotation class):

ERROR: IOException at line 28: Unable to write /Users/Assaf/Dropbox/shared_imaging/Tissue_Segmentation/export/Li63NDCLAMP-labels.png!  No compatible writer found.

ERROR: Script error (IOException)
    at qupath.lib.images.writers.ImageWriterTools.writeImage(
    at qupath.lib.scripting.QP.writeImage(
    at qupath.lib.scripting.QP$writeImage$0.callStatic(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(
    at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(
    at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(
    at qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(
    at qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(
    at qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(
    at qupath.lib.gui.scripting.DefaultScriptEditor$
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)

How can I resolve this?


The PNG writer can’t support an arbitrary number of channels.
Change the file extension to either .tif or .ome.tif.

Thank you @petebankhead, I’ve been using it and it works well!
I just wanted to clarify the consequences of the export order. I have multiple people annotating the images, and they probably use different strategies and orders when annotating regions, but the overall guideline is to start with the non-tissue area, then go from the smallest to the largest annotations (small structures first, then tumor/normal/stroma etc.). So I have three questions:

  1. If I use the following with the multichannelOutput(true) option, am I correct that labels coming later in the list will take over pixels annotated by earlier ones? (e.g. if a pixel is annotated as both normal and stroma, will that pixel have a value of 255 only in the third channel, corresponding to stroma, and not in the first channel, corresponding to normal?)
  .addLabel('Normal', 0)
  .addLabel('Tumor', 1)
  .addLabel('Stroma', 2)
  .addLabel('Bile Ducts', 3)
  .addLabel('Lymphoid Aggregate', 4)
  .addLabel('Tissue Fold', 5)
  .addLabel('Background', 6)
  2. The order in which the regions were annotated doesn’t really matter, since I’m selecting the export order here, right?
  3. Is there a way to annotate a smaller region inside a bigger region in QuPath? It seems that once we annotate a large area, we can’t annotate the substructures within it, so we proceeded with the smallest-to-largest strategy. But this complicates our work when we need to re-annotate a structure.

Thank you!

No – pixels are only taken over with multichannelOutput(false). With multichannel output, overlapping annotations are preserved: a pixel can have the value 255 in several channels at once.
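A minimal NumPy sketch of the difference (hypothetical toy masks, not the QuPath API): in a single labeled image, labels exported later overwrite earlier ones at overlapping pixels, whereas with one channel per class each mask is independent:

```python
import numpy as np

# Two hypothetical overlapping annotations on a 1x4 image
normal = np.array([1, 1, 1, 0], dtype=bool)
stroma = np.array([0, 0, 1, 1], dtype=bool)  # overlaps normal at pixel 2

# Analogue of multichannelOutput(false): one labeled image, later labels win
labeled = np.zeros(4, dtype=np.uint8)
labeled[normal] = 1   # 'Normal'
labeled[stroma] = 3   # 'Stroma' overwrites the overlap pixel

# Analogue of multichannelOutput(true): a 0/255 channel per class
multichannel = np.stack([np.where(normal, 255, 0),
                         np.where(stroma, 255, 0)]).astype(np.uint8)

print(labeled)             # [1 1 3 3] -> the overlap pixel lost its 'Normal' label
print(multichannel[:, 2])  # [255 255] -> the overlap pixel is 255 in both channels
```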

Mostly yes – the order in which you annotate doesn’t matter; it is only the export order that really matters. However, if you use multichannelOutput(false), then the annotation order can help you determine which annotations you need to make more carefully. See the tip at the end of

Yes, see and


I see, thanks @petebankhead

So when I open the exported TIFF file in Python and inspect the mask values in each channel, I see that most of the values are either 0 or 255, but oddly there are values in between:
[ 0 4 12 20 28 32 36 56 60 84 95 96 100 103 115 120 135 139 151 155 159 171 195 199 219 223 227 235 243 251 255]
Why is that? Note that I don’t see any difference between the mask image generated after discretizing all non-zero values to 255 (top) and the one where only pixels equal to 255 are considered positive, i.e. all values strictly between 0 and 255 are set to 0 (bottom). I just need the pixels annotated by the annotators, so if the software adds a rim around the edges I should exclude it.

Thank you

I would expect values other than 0 and 255 to occur if

  • compression is applied (e.g. the masks are saved as JPEGs at some point), or
  • the masks are resized with interpolation

I don’t know enough about your exact steps to know if this is the explanation and, if so, exactly where this happens.

Assuming you export the masks as TIFF, I would suggest opening the TIFF images in ImageJ and checking the values there. If they are not all 0 or 255, something must be going wrong in the export; if they are, then my expectation is that the interpolation is happening within Python.
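A quick way to see the interpolation effect in pure NumPy (simulating a resize, not your actual pipeline): averaging-based downsampling of a 0/255 mask creates intermediate values along edges, while nearest-neighbor-style subsampling keeps the mask binary. So if masks are resized in Python, nearest-neighbor interpolation should be used (e.g. order=0 in scikit-image, or cv2.INTER_NEAREST in OpenCV).

```python
import numpy as np

# A binary 0/255 mask with a vertical edge
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 1:] = 255

# Downsample 2x by block averaging (roughly what interpolating resizes do):
# pixels straddling the edge get intermediate values like 127.5
averaged = mask.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.unique(averaged))  # intermediate values appear

# Downsample 2x by subsampling (nearest-neighbor): values stay 0/255
nearest = mask[::2, ::2]
print(np.unique(nearest))
```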


I found that it’s indeed the interpolation. Thanks!
