QuPath 0.2.0m11 - Updated "Import masks as annotations" script


Although the mask exporter script was previously updated to work in 0.2.0, the mask importer script is still written for 0.1.2 and thus breaks at multiple points in 0.2.0.

There are several broad explanations in this forum of what should be changed to restore its functionality, but it still took me longer than expected to iron out the remaining quirks.

Thus, even if these changes are trivial for Java/Groovy-inclined image analysts to implement, I’m posting a 0.2.0m11-compatible script below in hopes that it will save fellow biologists some time :slight_smile:


/**
 * Script to import binary masks & create annotations, adding them to the current object hierarchy.
 * It is assumed that each mask is stored in a PNG file in a project subdirectory called 'masks'.
 * Each file name should be of the form:
 *   [Short original image name]_[Classification name]_([downsample],[x],[y],[width],[height])-mask.png
 * Note: It's assumed that the classification is a simple name without underscores, i.e. not a 'derived' classification
 * (so 'Tumor' is ok, but 'Tumor: Positive' is not)
 * The x, y, width & height values should be in terms of coordinates for the full-resolution image.
 * By default, the image name stored in the mask filename has to match that of the current image - but this check can be turned off.
 * @author Pete Bankhead
 */

import ij.measure.Calibration
import ij.plugin.filter.ThresholdToSelection
import ij.process.ByteProcessor
import ij.process.ImageProcessor
import qupath.imagej.tools.IJTools
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.classes.PathClassFactory
import static qupath.lib.gui.scripting.QPEx.*

import javax.imageio.ImageIO
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs
import qupath.lib.objects.PathObjects

// Get the main QuPath data structures
def imageData = QPEx.getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def server = getCurrentServer()

// Only parse files that contain the specified text; set to '' if all files should be included
// (This is used to avoid adding masks intended for a different image)
def includeText = server.getMetadata().getName()

// Get a list of image files, stopping early if none can be found
def pathOutput = QPEx.buildFilePath(QPEx.PROJECT_BASE_DIR, 'masks')
def dirOutput = new File(pathOutput)
if (!dirOutput.isDirectory()) {
    print dirOutput + ' is not a valid directory!'
    return
}
def files = dirOutput.listFiles({f -> f.isFile() && f.getName().contains(includeText) && f.getName().endsWith('-mask.png') } as FileFilter) as List
if (files.isEmpty()) {
    print 'No mask files found in ' + dirOutput
    return
}

// Create annotations for all the files
def annotations = []
files.each {
    try {
        annotations << parseAnnotation(it)
    } catch (Exception e) {
        print 'Unable to parse annotation from ' + it.getName() + ': ' + e.getLocalizedMessage()
    }
}

// Add annotations to image
hierarchy.addPathObjects(annotations)

/**
 * Create a new annotation from a binary image, parsing the classification & region from the file name.
 * Note: this code doesn't bother with error checking or handling potential issues with formatting/blank images.
 * If something is not quite right, it is quite likely to throw an exception.
 * @param file File containing the PNG image mask.  The image name must be formatted as above.
 * @return The PathAnnotationObject created based on the mask & file name contents.
 */
def parseAnnotation(File file) {
    // Read the image
    def img = ImageIO.read(file)

    // Split the file name into parts: [Image name, Classification, Region]
    def parts = file.getName().replace('-mask.png', '').split('_')

    // Discard all but the last 2 parts - it's possible that the original name contained underscores,
    // so better to work from the end of the list and not the start
    def classificationString = parts[-2]

    // Extract region, and trim off parentheses (admittedly in a lazy way...)
    def regionString = parts[-1].replace('(', '').replace(')', '')

    // Create a classification, if necessary
    def pathClass = null
    if (classificationString != 'None')
        pathClass = PathClassFactory.getPathClass(classificationString)

    // Parse the x, y coordinates of the region - width & height not really needed
    // (but could potentially be used to estimate the downsample value, if we didn't already have it)
    def regionParts = regionString.split(',')
    double downsample = regionParts[0] as double
    int x = regionParts[1] as int
    int y = regionParts[2] as int

    // To create the ROI, travel into ImageJ
    def bp = new ByteProcessor(img)
    bp.setThreshold(127.5, Double.MAX_VALUE, ImageProcessor.NO_LUT_UPDATE)
    def roiIJ = new ThresholdToSelection().convert(bp)
    int z = 0
    int t = 0
    def plane = ImagePlane.getPlane(z, t)

    // Convert ImageJ ROI to a QuPath ROI
    // This assumes we have a single 2D image (no z-stack, time series)
    // Currently, we need to create an ImageJ Calibration object to store the origin
    // (this might be simplified in a later version)
    def cal = new Calibration()
    cal.xOrigin = -x/downsample
    cal.yOrigin = -y/downsample
    def roi = IJTools.convertToROI(roiIJ, cal, downsample, plane)

    // Create & return the object
    return new PathAnnotationObject(roi, pathClass)
}
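As a quick sanity check outside QuPath, the filename convention the script expects can be mimicked with the same parsing logic. A minimal Python sketch (the filename below is made up for illustration):

```python
# Sketch of the filename convention parsed by the script above:
#   [Image name]_[Classification]_([downsample],[x],[y],[width],[height])-mask.png
def parse_mask_name(name):
    parts = name.replace('-mask.png', '').split('_')
    # Work from the end of the list, since the image name itself may contain underscores
    classification = parts[-2]
    region = parts[-1].strip('()').split(',')
    downsample = float(region[0])
    x, y, w, h = (int(v) for v in region[1:])
    return classification, downsample, x, y, w, h

print(parse_mask_name('slide_1_Tumor_(1.0,2048,4096,1024,1024)-mask.png'))
# -> ('Tumor', 1.0, 2048, 4096, 1024, 1024)
```

If this fails on one of your filenames, the Groovy script will almost certainly fail on it too.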

@pedrolmoura I think you’re referring to these scripts from my blog?

I’ve been rather neglecting the blog lately, in favour of consolidating all the up-to-date things on readthedocs. There’s a ‘better’ (simpler/more flexible) way to export here, but no corresponding import yet. I hope to get around to that some day…

I wasn’t sure if import was being used much – so good to know it is useful, and to have this as an updated version, thanks!


@petebankhead, Yes, those are the ones (and I then found a version of the export script that was updated to work in 0.2.0m6, which is fully functional in later milestones).

I found importing quite useful! (mostly from a quality-control standpoint when changing analysis parameters: annotating the regions I want to analyse and exporting those regions as masks lets me import the annotations back and change things at will, without ever worrying about losing them)


If you want to purely import and export objects in QuPath, you can do that using:

This can be adjusted for detections as well, and organized so that it can be run for a whole project. Note that objects exported this way include all of their measurement information.

@Research_Associate I’d posted the method of serializing to a file before as a workaround, and it should continue to work, but I wouldn’t really recommend it in general, specifically because of the following:

The serialization method is very QuPath-specific, and if annotation definitions are changed in later versions compatibility may break. This probably won’t happen in the near future, but may very well happen at some point (perhaps years from now).

If you want to avoid storing a mask, GeoJSON is a better compromise, since it isn’t QuPath-specific.

But if the mask option is working, I see no reason to change it :slight_smile:
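For reference, a GeoJSON annotation is just a standard Feature with polygon coordinates, which is why it travels well between tools. A minimal Python sketch of the kind of structure involved (the coordinates and the 'classification' property are illustrative, not QuPath's exact schema, which may differ between versions):

```python
import json

# A minimal GeoJSON Feature for a rectangular annotation.
# The 'classification' property name is illustrative only.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[0, 0], [1024, 0], [1024, 1024], [0, 1024], [0, 0]]],
    },
    "properties": {"classification": {"name": "Tumor"}},
}

text = json.dumps(feature)
# Round-trips through any GeoJSON-aware tool
assert json.loads(text)["geometry"]["type"] == "Polygon"
```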


Ah, I had preferred the serial object method as it gave me the option to save the objects at full resolution and with measurements. Maybe I should look into GeoJSON, but I haven’t needed to transfer annotations between applications… yet.

@Research_Associate The serialization method is quicker and preserves pretty much the same information that you’d get from storing a .qpdata file (since it uses serialization… which is the reason it wouldn’t be easy to change any time soon).

But it can also have issues with whether and how parent/child objects are serialized along with the objects you might expect.

Basically… it belongs in the category of things that can sometimes be useful, but which I really wouldn’t want to recommend doing in general – because it’s a trick that can do more harm than good if applied in the wrong circumstances.


Following this issue a little bit.

I have a segmentation PNG mask with 4 different categories in it, encoded as 4 different gray values. It looks like this:

The mask and the WSI have the same width and height at max level. I tried to use the script from the first comment but I got an error: width*height > Integer.MAX_VALUE!.

I am not sure how to handle this since this is an internal value in Java that cannot be changed, right? Also, I am not sure if the above code could work on non-binary data.

I am aware that the scripting has changed a little by now, so I was hoping for a little help here. Also, I am not sure if I should (and how) transform the png to something like GeoJSON.

As you may see, I am a complete beginner in Java and Groovy, so apologies for the probably naive questions.

Hi @Joan_Gibert_Fernande, I’m not really clear on what you want to achieve here, how/where you generated such a large PNG, or when precisely the error occurred. Can you describe all this in a bit more detail?

Java can’t have an array longer than Integer.MAX_VALUE, and most image representations involve storing all the pixels in an array, so that imposes a limit that needs to somehow be avoided.
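The limit is easy to check against the 56223×72039 WSI dimensions that come up later in this thread; the pixel count comfortably exceeds Integer.MAX_VALUE (2^31 − 1):

```python
# Java arrays are indexed by int, so width*height must stay below 2**31 - 1
INT_MAX = 2**31 - 1  # Integer.MAX_VALUE

width, height = 56223, 72039  # the WSI dimensions from this thread
print(width * height, INT_MAX)
print(width * height > INT_MAX)  # -> True: a single array can't hold this mask
```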


Sorry for the lack of information.

I have a WSI I want to annotate. Something like this one which is 56223x72039px:

From it, I generated tiles and used an algorithm that detects cell nuclei and classifies them as Tumor cells, Connective cells, Lymphocytes or Death cells. I generate PNG masks of every tile I predicted, similar to the one I attached in the previous post.

From these tiles, with the coordinates encoded in the filenames, I built a script that takes those coordinates and generates an image with the tiles “pasted” in the correct place. Something like:

I used this image just to make things as clear as possible; I am not generating tiles from the background, so the real image is a little bit more patchy. In any case, this image has the exact same dimensions as the WSI, so I was hoping to import it as an annotation in QuPath.

For this, I tried the script shared in the first post of this thread, trying to fulfil the requirements to make it work: the mask file placed in the 'masks' directory with a name of the form [Short original image name]_[Classification name]_([downsample],[x],[y],[width],[height])-mask.png.
In this case, I used ‘Tumor’ as the classification name just to check how it goes. However, when trying to run it in the QuPath terminal, I just got the error mentioned above. The exact error output is:

INFO: file.bif
INFO: [/mnt/isilon/try/masks/file.bif_Tumor_(0,0,0,56223,72039)-mask.png]
INFO: Unable to parse annotation from file.bif_Tumor_(0,0,0,56223,72039)-mask.png: width*height > Integer.MAX_VALUE!
INFO: Result: false

Also, I have more questions on how to manage these annotation masks. Should I transform them to other formats like GeoJSON?

I hope this is clearer now. Thanks for the great feedback!

Thanks, QuPath needs a good, general way to handle this scenario; it doesn’t have one yet, and I’m not sure when I’ll get time to work on it.

In the short term, I just have general suggestions that hopefully could help…

  • I don’t think assembling the image as a PNG makes it any easier, unless you can convert that PNG into a pyramidal TIFF. libvips might help with that (I’m not certain). If it does, then you could use the pixel classification/thresholding in QuPath to create your objects. Then you just need to transfer those objects to the right image (there are some hack-y ways to do that via Java serialization).

  • The alternative is to create objects one tile at a time, using an approach similar to the script in the first post. In that case, it sounds like all you need to do is translate your ROI… which should be quite easy. Inserting a single line could do that, e.g.

def roi = IJTools.convertToROI(roiIJ, cal, downsample,plane)
roi = roi.translate(100, 200) // New line, use the coordinates from your image name
// Create & return the object
return PathObjects.createAnnotationObject(roi, pathClass) // Preferred to using the constructor

Any affine transform could be applied to the ROI with a bit more effort (e.g. via this).

I think the second approach is likely to be easier, but the first approach may be better if you need to handle objects that overlap tile boundaries.
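For the first (pyramidal) route, libvips' tiffsave command can write a tiled pyramid from a single large image. A sketch of the invocation (the filenames are placeholders, and the flags should be verified against your installed libvips version):

```shell
# Hypothetical example: write a tiled, pyramidal TIFF from a large assembled PNG,
# so it can be opened in QuPath and used with the pixel classifier/thresholder.
vips tiffsave assembled-mask.png assembled-mask.tif --tile --pyramid --compression lzw
```

libvips streams the image rather than holding it all in one array, which is what sidesteps the Integer.MAX_VALUE problem above.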


Thanks for the feedback, I am trying to perform both. I will come back with news as soon as I have something.

Looking at the tiles, it seems that there is some kind of discrepancy between the x,y coordinates in the file name (pixel position) and the ones QuPath expects. Are they in micrometers?

Regarding the annotations, since the images have 4 different values, is there an easy way to handle this in the first script?


I’m not sure what precisely you’re referring to, but the answer is probably ‘no’ - QuPath uses pixel coordinates for almost everything, and converts to micrometers only for certain measurements or display in the UI.

I’m not sure what first script you’re referring to… there are some methods to convert ImageJ labeled images to ROIs, e.g. at https://github.com/qupath/qupath/blob/a03756328188999c0b7f12c290cda0589c50bd4b/qupath-core-processing/src/main/java/qupath/imagej/processing/RoiLabeling.java#L310

I tend to prefer ImageJ’s contour tracing for the reasons outlined at https://petebankhead.github.io/qupath/technical/2018/03/13/note-on-contours.html

However, the methods in RoiLabeling have proved unreliable for tiled images at arbitrary resolutions, due to rounding errors in the definition of the region bounding box. This results in small – but still problematic – misalignments.

Therefore the pixel classifier creates objects using methods defined in PixelClassifierTools.
In principle, these could be used independently of the pixel classifier – but precisely how would be quite application dependent… at least until I get around to writing a proper ‘standard’ way to do it.

Sorry, I was referring to the first post of this thread, where the script uses part of the file name to give the annotation a value (in this case “Tumor”). However, it only accepts one label, and I was wondering how difficult it would be to accept more than one (in my case, 4).

I will take a look at the links you shared but I guess I have to:

  1. Merge all predicted image tiles into a single png file
  2. Convert it to pyramidal as commented here:
  3. Transform it to a ROI
  4. Upload it to the WSI as an annotation

I am not quite sure what is happening. At first I thought that maybe it was due to the bif format, since it’s not fully supported. However, I also tried with svs files and I am still having issues. Here is a screenshot:

I tried to upload the annotated tile and paste it at the origin using roi.translate(0,0). However, rather than being placed at the top left (I guess it should go there, right?), it is far below that point. Both extensions give me similar results.

Maybe I misunderstood how roi.translate works?


roi.translate(double dx, double dy) takes displacement values, not absolute coordinates. So it will depend on the x,y coordinates of the initial ROI bounding box.
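In other words, to move a ROI so that its bounding box starts at a target position, the displacement is target minus current. A small Python sketch of the arithmetic (the coordinates are illustrative):

```python
# translate(dx, dy) shifts by a displacement, not to an absolute position.
# To place a ROI whose bounding box starts at (bx, by) at a target (tx, ty):
def displacement(bx, by, tx, ty):
    return tx - bx, ty - by

# e.g. a ROI currently at (150, 300) that should sit at the tile origin (0, 0)
dx, dy = displacement(150, 300, 0, 0)
print(dx, dy)  # -> -150 -300
```

So translate(0, 0) is a no-op; it leaves the ROI wherever it already is.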


Thanks, now everything is clearer. I ended up changing

    // To create the ROI, travel into ImageJ
    def bp = new ByteProcessor(img)
    bp.setThreshold(1, 255, ImageProcessor.NO_LUT_UPDATE)
    def roiIJ = new ThresholdToSelection().convert(bp)

In order to use the different pixel values to generate the 4 different masks. I am sure it’s not the best way, but at this point it works quite nicely with tiles. With this, I have 4 different scripts with different annotation values and different setThreshold ranges (in my specific case I have values at 63.75, 85, 127.5 and 255).
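An alternative to overlapping threshold ranges is to select each class's gray value exactly, so each binary mask contains only its own pixels. A Python sketch of the idea, using a plain list as a stand-in for the image (the class names and gray values here are illustrative, not necessarily the exact values in these masks):

```python
# Hypothetical mapping of class names to the gray value used in the mask
CLASS_VALUES = {"Tumor": 255, "Connective": 128, "Lymphocyte": 85, "Death": 64}

pixels = [0, 64, 85, 128, 255, 255, 0, 85]  # toy 1-D stand-in for an image

def mask_for(value, pixels):
    # 255 where the pixel equals the class value, 0 elsewhere (a binary mask)
    return [255 if p == value else 0 for p in pixels]

print(mask_for(85, pixels))  # -> [0, 0, 255, 0, 0, 0, 0, 255]
```

With exact-value selection, a range like (1, 255) never catches every non-background pixel at once, which is what produces the square outline described below when background and foreground are inverted.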

However, in one of the masks (when setThreshold = (1, 255, ImageProcessor.NO_LUT_UPDATE)), I got a square around the annotation tile, like this:

I generate the images with PIL, and it does not seem to add any black square border around the image, but there it is. Any ideas why this is happening?

I can’t answer that, as I don’t really know what the content of the image being thresholded is, or the expected output. It looks like the annotations you get may be the inverse of the desired annotation… so changing the threshold might be the solution.

If your original image uses indexed colors, it can be a bit of a fight to get these correctly into Java without having converted to RGB along the way. But it really depends upon the details of how the image is written and read.