Exporting detections within selected annotation

I am trying to create a binary mask using the detections in QuPath. However, since the whole-slide image is too big, I was wondering if there's a way to export only the detections and convert them to binary within the selected annotation. I have read Pete's blog post (https://petebankhead.github.io/qupath/scripting/2018/03/13/script-export-import-binary-masks.html), but this doesn't work for me since it exports the entire annotation, while I need the detections exported. Is there a way to do this? The alternative I've thought of is to first export the region to ImageJ, save the new image, then send the overlay back to QuPath, but every time I do this the detections' coordinates change and no longer match the image file.

Could you explain more about your downstream process (what are you doing with the masks when you export them, would JSON coordinates be better, etc.)?

Have you taken a look at the documentation? https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html#individual-annotations

You might try to see if the labelserver will pick up detections as well. Note the loop at the bottom exports “per annotation,” so if you size your annotations appropriately you should be able to complete the export. Getting things back into QuPath I am less sure about.

Not sure what this part means.

Thanks for the suggestions, I’ll play around with labelserver!
What exactly I'm trying to do is create a training set (binary masks paired with the original image). I can successfully export all the detections as a downsampled, binary whole-slide image, but I also need the original image exported as a .png file, which is not working since it's too big (currently it's in .svs format). This is why I thought selecting a specific region to export would be a good idea. Is there a better way to do this?

It isn't really a better way, but if you have a tissue annotation or similar, you could break it into annotation tiles in the Tiles & superpixels menu, and then export each of those using the label server. If you are using a pixel or object classifier, I'm not sure how it would handle objects touching the tile borders, so bigger tiles are probably better, up to the point where your process can still handle them easily.
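For reference, a rough sketch of scripting that tiling step (assuming v0.2.0; the tile size and option values here are illustrative, taken from the Create tiles dialog, and would need adjusting for your pipeline):

```groovy
// Select the tissue annotation(s), then split them into square annotation tiles
// via the Tiler plugin (Analyze -> Tiles & superpixels -> Create tiles).
selectAnnotations()
runPlugin('qupath.lib.algorithms.TilerPlugin',
        '{"tileSizeMicrons": 500.0, "trimToROI": true, ' +
        '"makeAnnotations": true, "removeParentAnnotation": false}')
```

With `makeAnnotations: true` the tiles become annotation objects, so a per-annotation export loop will pick them up.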

You may also need to convert all of your cells into annotations, then cycle through your unclassified annotations for export. I’m not sure.

Or you might try this, though it is for an older version and might not work. It would also need heavy modifications to work on a per-annotation basis and to include multiple labels.

Yeah, best way I found was to adjust the for loop to:
for (annotation in getAnnotationObjects().findAll{it.getPathClass() == null}) {
since my annotations were unclassified (may vary), and used another script to make the objects I was interested in into classified annotations.
It also needs some alteration, but it is fairly straightforward when reading the description.

Thank you so much, this is really helpful! I’ll try again with these suggestions and see if it works!

Sorry to bother you with so many questions, but I keep getting an error when I run the code to convert detections to annotations:

No such property: PathObjects for class: Script27

The error occurs in line 12 of the code (the toChange.each line):

toChange = getDetectionObjects().findAll {it.getDisplayedName().equalsIgnoreCase("Not inflammation")}
newObjects = []
toChange.each {
    roi = it.getROI()
    annotation = PathObjects.createAnnotationObject(roi, it.getPathClass())
    newObjects.add(annotation)
}

// Actually add the objects
addObjects(newObjects)
//Comment this line out if you want to keep the original objects
removeObjects(toChange, true)
  1. What version of QuPath are you using, and is Run → Include default imports checked? This was assuming at least 0.2.0. How are you running the script?

  2. Thinking about it a little more, I hope you aren't training a deep learning classifier based off of QuPath's cell detection, as you would be training any inaccuracies from the QuPath outlines into the DL model, if that is what you are doing. There weren't many details, but I wanted to make sure.

I am using version 0.1.2, and I'm running the code in the script editor in QuPath. I have Run → Include default imports checked.

I'm aware that QuPath's cell detection is inaccurate, but I was planning on training a DL model and then doing QC, as it would take too much time to do manual annotation.

Oh, haha, none of that will be useful to you then. The label server did not even exist in 0.1.2. Sorry, one thing you always want to include in any post is the version you are using!

I am not sure how to do what you want in 0.1.2, the label server is one of the improvements for 0.2.0.

Haha, I didn’t even realize I was using the older version, I’ll go ahead and update it! Thanks!

@Jihyeon_Je in v0.2.0 you can use this one-line script to quickly see the options available when building a labeled image server:

println describe(LabeledImageServer.Builder)

In addition to useAnnotations() (the default), it includes options useCellNuclei(), useCells() and useDetections() to help enable the export of masks for other kinds of object.
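A minimal sketch of how such an export might look (assuming v0.2.0; the class name "Not inflammation", the downsample value, and the output paths are just placeholders to adapt):

```groovy
import qupath.lib.images.servers.LabeledImageServer
import qupath.lib.regions.RegionRequest

def imageData = getCurrentImageData()

// Build a labeled server that renders detections (rather than annotations) as labels
def labelServer = new LabeledImageServer.Builder(imageData)
        .backgroundLabel(0)               // unlabelled pixels -> 0
        .useDetections()                  // label detections instead of annotations
        .addLabel('Not inflammation', 1)  // pixels of this class -> 1 (placeholder class name)
        .downsample(4.0)                  // placeholder downsample
        .build()

// Export one mask per annotation region
def outputDir = buildFilePath(PROJECT_BASE_DIR, 'masks')
mkdirs(outputDir)
getAnnotationObjects().eachWithIndex { annotation, i ->
    def region = RegionRequest.createInstance(labelServer.getPath(), 4.0, annotation.getROI())
    writeImageRegion(labelServer, region, buildFilePath(outputDir, "mask_${i}.png"))
}
```

Since the only label maps to 1 against a background of 0, the written PNGs are effectively binary masks.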


Thanks @Research_Associate and @petebankhead for looking into this! It has been immensely helpful!!
I got it to work by slightly modifying Pete's labeled image export code.
I basically changed it to create binary masks instead of 8-bit labels, and to extract only the specific detection class I was interested in. I think my only concern now is getting rid of images that do not contain any tissue, since they take up a lot of space. And again, thanks so much for all your help!
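On the empty-tile concern, one option is to test each exported RGB tile for tissue before writing it. A rough sketch (the near-white threshold and background fraction are arbitrary values that would need tuning for your slides):

```groovy
import java.awt.image.BufferedImage

// Returns true if almost all pixels are near-white (i.e. background, no tissue).
// minChannelValue and maxBackgroundFraction are arbitrary - tune them for your data.
boolean isMostlyBackground(BufferedImage img, int minChannelValue = 230, double maxBackgroundFraction = 0.95) {
    int background = 0
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y)
            int r = (rgb >> 16) & 0xFF
            int g = (rgb >> 8) & 0xFF
            int b = rgb & 0xFF
            if (r > minChannelValue && g > minChannelValue && b > minChannelValue)
                background++
        }
    }
    return background / (double)(img.getWidth() * img.getHeight()) > maxBackgroundFraction
}
```

Calling this on the tile image just before `ImageIO.write` (and skipping the write when it returns true) should drop the tissue-free tiles.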

This is the code I used just in case someone else is interested in doing something similar:

/**
 * Script to export pixels & annotations for whole slide images.
 * The image can optionally be tiled during export, so that even large images can be exported at high resolution.
 * (Note: In this case 'tiled' means as separate, non-overlapping images... not a single, tiled pyramidal image.)
 * The downsample value and coordinates are encoded in each image file name.
 * The annotations are exported as 8-bit labelled images.
 * These labels depend upon annotation classifications; a text file giving the key is written for reference.
 * The labelled image can also optionally use indexed colors to depict the colors of the
 * original classifications within QuPath for easier visualization & comparison.
 * @author Pete Bankhead
 */

import qupath.lib.common.ColorTools
import qupath.lib.objects.classes.PathClass
import qupath.lib.regions.RegionRequest
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.scripting.QPEx

import javax.imageio.ImageIO
import java.awt.Color
import java.awt.image.BufferedImage
import java.awt.image.DataBufferByte
import java.awt.image.IndexColorModel

// Requested pixel size - used to define output resolution
// Set <= 0 to use the full resolution (whatever that may be)
// (But be careful with this - it could take a long time to run!)
double requestedPixelSizeMicrons = 1.0

// Maximum size of an image tile when exporting
int maxTileSize = 3000

// Export the original pixels (assumed to be RGB) for each tile
boolean exportOriginalPixels = true

// Export a labelled image for each tile containing annotations
boolean exportAnnotationLabelledImage = true

// NOTE: The following parameters only matter if exportAnnotationLabelledImage == true
// Ignore annotations that don't have a classification set
boolean skipUnclassifiedAnnotations = true
// Skip tiles without annotations (only applies to label exports - all image tiles will be written)
boolean skipUnannotatedTiles = true
// Create an 8-bit indexed image
// This is very useful for display/previewing - although need to be careful when importing into other software,
// which may prefer to replace labels with the RGB colors they refer to

// NOTE: The following parameter only matters if exportOriginalPixels == true
// Define the format for the export image
def imageFormat = 'PNG'

// Output directory for storing the tiles
def pathOutput = QPEx.buildFilePath(QPEx.PROJECT_BASE_DIR, 'exported_tiles')
QPEx.mkdirs(pathOutput)


// Get the main QuPath data structures
def imageData = QPEx.getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def server = imageData.getServer()

// Get the detection objects to export (here, those displayed as "Not inflammation")
def annotations = getDetectionObjects().findAll {it.getDisplayedName().equalsIgnoreCase("Not inflammation")}

// Get all the represented classifications
def pathClasses = annotations.collect({it.getPathClass()}) as Set

// We can't handle more than 255 classes (because of 8-bit representation)
if (pathClasses.size() > 255) {
    print 'Sorry! Cannot handle > 255 classifications - number here is ' + pathClasses.size()
    return
}

// Check if we've anything to do
if (!exportAnnotationLabelledImage && !exportOriginalPixels) {
    print 'Nothing to export!'
    return
}

// Calculate the downsample value
double downsample = 1
if (requestedPixelSizeMicrons > 0)
    downsample = requestedPixelSizeMicrons / server.getAveragedPixelSizeMicrons()
// Calculate the tile spacing in full resolution pixels
int spacing = (int)(maxTileSize * downsample)

// Create the RegionRequests
def requests = new ArrayList<RegionRequest>()
for (int y = 0; y < server.getHeight(); y += spacing) {
    int h = spacing
    if (y + h > server.getHeight())
        h = server.getHeight() - y
    for (int x = 0; x < server.getWidth(); x += spacing) {
        int w = spacing
        if (x + w > server.getWidth())
            w = server.getWidth() - x
        requests << RegionRequest.createInstance(server.getPath(), downsample, x, y, w, h)
    }
}

// Write the label 'key' (not needed here, since the exported masks are binary)

// Handle the requests in parallel
requests.parallelStream().forEach { request ->
    // Create a suitable base image name
    String name = String.format('%s_(%.2f,%d,%d,%d,%d)',
            server.getShortServerName(),
            request.getDownsample(),
            request.getX(),
            request.getY(),
            request.getWidth(),
            request.getHeight()
    )

    // Export the raw image pixels if necessary
    // If we do this, store the width & height - to make sure we have an exact match
    int width = -1
    int height = -1
    if (exportOriginalPixels) {
        def img = server.readBufferedImage(request)
        width = img.getWidth()
        height = img.getHeight()
        def fileOutput = new File(pathOutput, name + '.' + imageFormat.toLowerCase())
        ImageIO.write(img, imageFormat, fileOutput)
    }

    // Export the labelled tiles if necessary
    if (exportAnnotationLabelledImage) {
        // Calculate dimensions if we don't know them already
        if (width < 0 || height < 0) {
            width = Math.round(request.getWidth() / downsample)
            height = Math.round(request.getHeight() / downsample)
        }
        // Fill the annotations with the appropriate label
        def imgMask = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY)
        def g2d = imgMask.createGraphics()
        g2d.setClip(0, 0, width, height)
        g2d.scale(1.0/downsample, 1.0/downsample)
        g2d.translate(-request.getX(), -request.getY())
        int count = 0
        for (annotation in annotations) {
            def roi = annotation.getROI()
            if (!request.intersects(roi.getBoundsX(), roi.getBoundsY(), roi.getBoundsWidth(), roi.getBoundsHeight()))
                continue
            def shape = PathROIToolsAwt.getShape(roi)
            // Binary mask: paint every exported object white (255)
            g2d.setColor(Color.WHITE)
            g2d.fill(shape)
            count++
        }
        g2d.dispose()
        if (count > 0 || !skipUnannotatedTiles) {
            // Extract the bytes from the image
            def buf = imgMask.getRaster().getDataBuffer() as DataBufferByte
            def bytes = buf.getData()
            // Check if we actually have any non-zero pixels, if necessary -
            // we might not if the annotation bounding box intersected the region, but the annotation itself does not
            if (skipUnannotatedTiles && !bytes.any { it != (byte)0 })
                return
            // Write the mask
            def fileOutput = new File(pathOutput, name + '-labels.png')
            ImageIO.write(imgMask, 'PNG', fileOutput)
        }
    }
}
print 'Done!'