Tiling whole slide images with a little overlap at borders

Hi everyone,

I would like to tile a whole slide image automatically and save the tiles as TIFF images, named by their coordinates, but with some overlap between the tiles (e.g. 10%). I will stitch them back together after running some processing on the tiles; that processing crops the tile borders slightly, which is why I need overlapping borders.

Pete provided a script for this, but it produces non-overlapping tiles. Is it possible to make the tiles overlap? (Thanks to Pete for the script; it is perfectly useful!)

Any help would be appreciated!

/**
 * Script to export pixels & annotations for whole slide images.
 *
 * The image can optionally be tiled during export, so that even large images can be exported at high resolution.
 * (Note: In this case 'tiled' means as separate, non-overlapping images... not a single, tiled pyramidal image.)
 *
 * The downsample value and coordinates are encoded in each image file name.
 *
 * The annotations are exported as 8-bit labelled images.
 * These labels depend upon annotation classifications; a text file giving the key is written for reference.
 *
 * The labelled image can also optionally use indexed colors to depict the colors of the
 * original classifications within QuPath for easier visualization & comparison.
 *
 * @author Pete Bankhead
 */


import qupath.lib.common.ColorTools
import qupath.lib.objects.classes.PathClass
import qupath.lib.regions.RegionRequest
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.scripting.QPEx

import javax.imageio.ImageIO
import java.awt.Color
import java.awt.image.BufferedImage
import java.awt.image.DataBufferByte
import java.awt.image.IndexColorModel

// Requested pixel size - used to define output resolution
// Set <= 0 to use the full resolution (whatever that may be)
// (But be careful with this - it could take a long time to run!)
double requestedPixelSizeMicrons = 0.0

// Maximum size of an image tile when exporting
int maxTileSize = 2000

// Export the original pixels (assumed to be RGB) for each tile
boolean exportOriginalPixels = true

// Export a labelled image for each tile containing annotations
boolean exportAnnotationLabelledImage = true

// NOTE: The following parameters only matter if exportAnnotationLabelledImage == true
// Ignore annotations that don't have a classification set
boolean skipUnclassifiedAnnotations = true
// Skip tiles without annotations (only applies to label exports - all image tiles will be written)
boolean skipUnannotatedTiles = true
// Create an 8-bit indexed image
// This is very useful for display/previewing - although be careful when importing into other software,
// which may prefer to replace labels with the RGB colors they refer to
boolean createIndexedImageLabels = true

// NOTE: The following parameter only matters if exportOriginalPixels == true
// Define the format for the export image
def imageFormat = 'JPG'

// Output directory for storing the tiles
def pathOutput = QPEx.buildFilePath(QPEx.PROJECT_BASE_DIR, 'exported_tiles')
QPEx.mkdirs(pathOutput)

//------------------------------------------------------------------------------------

// Get the main QuPath data structures
def imageData = QPEx.getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def server = imageData.getServer()

// Get the annotations that have ROIs & have classifications (if required)
def annotations = hierarchy.getFlattenedObjectList(null).findAll {
    it.isAnnotation() && it.hasROI() && (!skipUnclassifiedAnnotations || it.getPathClass() != null) }

// Get all the represented classifications
def pathClasses = annotations.collect({it.getPathClass()}) as Set

// We can't handle more than 255 classes (because of 8-bit representation)
if (pathClasses.size() > 255) {
    print 'Sorry! Cannot handle > 255 classifications - number here is ' + pathClasses.size()
    return
}

// For each classification, create an integer label >= 1 & cache a color for drawing it
// Also, create a LUT for visualizing more easily
def labelKey = new StringBuilder()
def pathClassColors = [:]
int n = pathClasses.size() + 1
def r = ([(byte)0] * n) as byte[]
def g = r.clone()
def b = r.clone()
def a = r.clone()
pathClasses.eachWithIndex{ PathClass entry, int i ->
    // Record integer label for key
    int label = i+1
    String name = entry == null ? 'None' : entry.toString()
    labelKey << name << '\t' << label << System.lineSeparator()
    // Store Color based on label - needed for painting with Graphics2D
    pathClassColors.put(entry, new Color(label, label, label))
    // Update RGB values for IndexColorModel
    // Use gray as the default color indicating no classification
    int rgb = entry == null ? ColorTools.makeRGB(127, 127, 127) : entry.getColor()
    r[label] = ColorTools.red(rgb)
    g[label] = ColorTools.green(rgb)
    b[label] = ColorTools.blue(rgb)
    a[label] = 255
}

// Check if we've anything to do
if (!exportAnnotationLabelledImage && !exportOriginalPixels) {
    print 'Nothing to export!'
    return
}

// Calculate the downsample value
double downsample = 1
if (requestedPixelSizeMicrons > 0)
    downsample = requestedPixelSizeMicrons / server.getAveragedPixelSizeMicrons()
    
// Calculate the tile spacing in full resolution pixels
int spacing = (int)(maxTileSize * downsample)

// Create the RegionRequests
def requests = new ArrayList<RegionRequest>()
for (int y = 0; y < server.getHeight(); y += spacing) {
    int h = spacing
    if (y + h > server.getHeight())
        h = server.getHeight() - y
    for (int x = 0; x < server.getWidth(); x += spacing) {
        int w = spacing
        if (x + w > server.getWidth())
            w = server.getWidth() - x
        requests << RegionRequest.createInstance(server.getPath(), downsample, x, y, w, h)
    }
}

// Write the label 'key'
println labelKey
def keyName = String.format('%s_(downsample=%.3f,tiles=%d)-key.txt', server.getShortServerName(), downsample, maxTileSize)
def fileLabels = new File(pathOutput, keyName)
fileLabels.text = labelKey.toString()

// Handle the requests in parallel
requests.parallelStream().forEach { request ->
    // Create a suitable base image name
    String name = String.format('%s_(%.2f,%d,%d,%d,%d)',
            server.getShortServerName(),
            request.getDownsample(),
            request.getX(),
            request.getY(),
            request.getWidth(),
            request.getHeight()
    )

    // Export the raw image pixels if necessary
    // If we do this, store the width & height - to make sure we have an exact match
    int width = -1
    int height = -1
    if (exportOriginalPixels) {
        def img = server.readBufferedImage(request)
        width = img.getWidth()
        height = img.getHeight()
        def fileOutput = new File(pathOutput, name + '.' + imageFormat.toLowerCase())
        ImageIO.write(img, imageFormat, fileOutput)
    }

    // Export the labelled tiles if necessary
    if (exportAnnotationLabelledImage) {
        // Calculate dimensions if we don't know them already
        if (width < 0 || height < 0) {
            width = Math.round(request.getWidth() / downsample)
            height = Math.round(request.getHeight() / downsample)
        }
        // Fill the annotations with the appropriate label
        def imgMask = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY)
        def g2d = imgMask.createGraphics()
        g2d.setClip(0, 0, width, height)
        g2d.scale(1.0/downsample, 1.0/downsample)
        g2d.translate(-request.getX(), -request.getY())
        int count = 0
        for (annotation in annotations) {
            def roi = annotation.getROI()
            if (!request.intersects(roi.getBoundsX(), roi.getBoundsY(), roi.getBoundsWidth(), roi.getBoundsHeight()))
                continue
            def shape = PathROIToolsAwt.getShape(roi)
            def color = pathClassColors.get(annotation.getPathClass())
            g2d.setColor(color)
            g2d.fill(shape)
            count++
        }
        g2d.dispose()
        if (count > 0 || !skipUnannotatedTiles) {
            // Extract the bytes from the image
            def buf = imgMask.getRaster().getDataBuffer() as DataBufferByte
            def bytes = buf.getData()
            // Check if we actually have any non-zero pixels, if necessary -
            // we might not if the annotation bounding box intersected the region, but the annotation itself does not
            if (skipUnannotatedTiles && !bytes.any { it != (byte)0 })
                return
            // If we want an indexed color image, create one using the data buffer & LUT
            if (createIndexedImageLabels) {
                def colorModel = new IndexColorModel(8, n, r, g, b, a)
                def imgMaskColor = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_INDEXED, colorModel)
                System.arraycopy(bytes, 0, imgMaskColor.getRaster().getDataBuffer().getData(), 0, width*height)
                imgMask = imgMaskColor
            }
            // Write the mask
            def fileOutput = new File(pathOutput, name + '-labels.png')
            ImageIO.write(imgMask, 'PNG', fileOutput)
        }
    }
}
print 'Done!'

I haven’t tried this as I don’t do a lot of exporting, but have you tried scaling that tile spacing, e.g. by 5%?
I guess it would end up as
int spacing = (int)(maxTileSize * downsample*1.05)

No guarantee that is right, just kind of eyeballing the code.

Also eyeballing the code because I don’t remember writing it well enough :slight_smile:

The ‘easiest’ thing might be to adjust

requests << RegionRequest.createInstance(server.getPath(), downsample, x, y, w, h)

to become something like

int pad = 100
requests << RegionRequest.createInstance(server.getPath(), downsample, x-pad, y-pad, w+pad*2, h+pad*2)

where pad is some value you choose.

The trouble is that I think other changes will be needed to account for the fact that at the boundaries the coordinates will be out of range.

With the caveat that I have not checked this even once (so it is presumably wrong somehow), this is my suggestion for how to change the loop:

int pad = 100
for (int y = pad; y < server.getHeight()-pad; y += spacing) {
    int h = spacing
    if (y + h > server.getHeight()-pad)
        h = server.getHeight() - pad - y
    for (int x = pad; x < server.getWidth()-pad; x += spacing) {
        int w = spacing
        if (x + w > server.getWidth()-pad)
            w = server.getWidth() - pad - x
        requests << RegionRequest.createInstance(server.getPath(), downsample, x-pad, y-pad, w+pad*2, h+pad*2)
    }
}
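If you would rather keep tiles anchored to the image edges (so the grid covers the full slide), another completely untested variant is to keep the original grid but clamp each padded request to the image bounds; border tiles then simply get less overlap than interior ones. This reuses the `server`, `spacing`, `downsample` and `requests` variables from the script above:

```groovy
// Untested sketch: keep the original tile grid, pad each request by 'pad'
// full-resolution pixels, and clamp the padded region to the image bounds
// so no coordinate goes out of range. Border tiles get reduced overlap.
int pad = 100
for (int y = 0; y < server.getHeight(); y += spacing) {
    int h = Math.min(spacing, server.getHeight() - y)
    for (int x = 0; x < server.getWidth(); x += spacing) {
        int w = Math.min(spacing, server.getWidth() - x)
        // Clamp the padded coordinates to the image
        int x0 = Math.max(0, x - pad)
        int y0 = Math.max(0, y - pad)
        int x1 = Math.min(server.getWidth(), x + w + pad)
        int y1 = Math.min(server.getHeight(), y + h + pad)
        requests << RegionRequest.createInstance(server.getPath(), downsample, x0, y0, x1 - x0, y1 - y0)
    }
}
```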

I’ll check both suggestions. Really appreciated!

Pete, your original script works perfectly for the whole slide area. I just couldn’t manage a workaround to run this code on a large annotation, i.e. tiling a large annotation instead of the whole slide.

Thanks again! I’ll get back after testing them.

Derp. And I edited my previous code since… if you want overlaps, you actually want to make the tiles larger :frowning:

Pete’s edit for overlapping works nicely on the X axis, and I assume it works well on the Y axis too.
The red zone is the overlap!

I’ll test Research Associate’s suggestion as well.


Oh no, if his version works, just stick with it :slight_smile:

We perform MICSSS (a multiplex IHC method based on consecutive staining): we stain a single tissue section with multiple markers and generate multiple images of the same tissue.
My biggest problem is whole slide image registration for generating data for coexpressed markers.
Whole slide image sometimes contains around 3 million cells and aligning all cells perfectly for multiple markers is extremely difficult.

I thought Pete’s script could help to split the images (of the same tissue) into tiles so that I could register the tiles individually, but the registration process crops zones at the tile edges. This leaves black zones when the tiles are stitched back together to regenerate the whole slide image.

The whole purpose is generating ‘whole-slide level’ multiplex IHC data.

I will try this workflow with Pete’s amazing script again.

As a last question; is it possible to modify this script just to tile an annotation area instead of whole slide?

I feel like that exact question came up somewhere before, but I can’t remember where. I have a feeling that if the overlay export can be limited to within the tiles, then the tile export could be as well…

Oh, right, it is discussed in the comments on the blog page that the script comes from. The boolean check for the true/false settings you make at the beginning of the script needs to be moved to an earlier point in the script; I forget exactly where.

And more discussion here: Exporting labelled images after tiling a specific annotations
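As a rough, untested sketch (so treat all of it as an assumption to verify), restricting the tile grid to an annotation’s bounding box rather than the whole slide might look something like this, reusing the `server`, `spacing`, `downsample` and `requests` variables from Pete’s script and assuming one annotation is selected:

```groovy
// Untested sketch: generate tile requests only within the bounding box
// of the currently selected annotation, instead of the whole image.
def annotation = QPEx.getSelectedObject()
def roi = annotation.getROI()
int bx0 = (int)roi.getBoundsX()
int by0 = (int)roi.getBoundsY()
int bx1 = (int)(roi.getBoundsX() + roi.getBoundsWidth())
int by1 = (int)(roi.getBoundsY() + roi.getBoundsHeight())
for (int y = by0; y < by1; y += spacing) {
    int h = Math.min(spacing, by1 - y)
    for (int x = bx0; x < bx1; x += spacing) {
        int w = Math.min(spacing, bx1 - x)
        requests << RegionRequest.createInstance(server.getPath(), downsample, x, y, w, h)
    }
}
```

Note this only restricts tiles to the bounding box; tiles overlapping the annotation boundary would still contain pixels outside the annotation itself.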

I also think that the overall process is going to be very difficult unless you can pre-register the two images so that they are close enough before registering the tiles!

That process was the entire reason we ended up getting Visiopharm, the plastic registration is pretty great. The rest of the program is… a bit rough.

I confirm that Y axis also works with Pete’s script.

That discussion covers exactly what I need, with one minor difference (mine needs overlapping borders). Thanks for sharing!!


We are also purchasing Visiopharm just because of this problem. It is in the process of finance approval :smile:

Ah, well, be aware that there is currently no way to get the resulting image OUT of Visiopharm. That was… troubling to some of the people here who didn’t realize that ahead of time.
*and frustrating to me personally since 99% of things I would rather do in QuPath.

Because they were not willing to provide a free trial, we invited them maybe 6-7 times to demonstrate it. Everything stays in their data format and data layers; we are well aware of that, but thanks for the warning.
All we need is whole slide level multiplex coexpression data and nearest neighbor analysis for immune cell and tumor cell/zone distances. Maybe more in the future but for now it is that simple.

I realize this is straying off topic but…
Have they shown a way to get what you have working with Phenomap? [*Oh wait, I saw how to get the nearest neighbor thing working in the App, I just forgot about it. It looked… I’m not sure ‘fun’ is the right word.] Last I saw, it didn’t work with a series of brightfield images, since it wasn’t multiplex (as in multichannel), which made classification a bit frustrating. Ooh, I never did try the AI tool on registered images. I wonder…

Coexpression is fine since you can align all of the images very well. I guess the nearest neighbor part can be handled externally as well if nothing else. One of the scripts I still hope to write for QuPath.

Odd that they wouldn’t give you a free trial; I have had at least 30 days at both places I worked. Unless you essentially had 6-7 free trials :slight_smile:

We have Image Cytometry tools like ViSNE and clustering tools that our computational scientists here developed (clustergrammer). We will use them for the multiplex data.

We also have multiplex ion beam imaging technology and Phenomap will mainly be used for that application and their demonstration was good enough.

They told us nearest neighbor works but they have never done a demonstration here. I hope it works :smile:

I hope we can have a QuPath script from you. It would be awesome.

They said their company policy is to come to institutes and run Visiopharm together with them on their computer. However, their time is limited, and it is not possible to get a good sense of the software without a trial. They have never provided a free trial, not even once. It is good to know that you got it twice!


Oh, I have no doubt the nearest neighbor thing works, I just vaguely remember it looked awkward to implement or something. A lot of it looks awkward to implement. The calculations section within the App makes me cringe.

Ah, Phenomap would definitely be great for the MIBI and CyTOF stuff. I have only played with free data sets there, though.

One of the free trials was a few years ago, and the other was after some indication was made that we would “probably buy the software or were significantly interested.” Or something. Pretty sure there was legal jargon involved.

That script is very useful for tiling an annotation and it works perfectly, but it has no overlapping-border function.
Is it possible to modify that script to produce overlapping borders, like Pete’s script?
Alternatively, could Pete’s script be modified to tile the annotated area instead of the whole image?
Any help would be appreciated!

Unfortunately I just had a bunch of projects dropped on me, so I don’t have time to look at any of the code. My cheating workaround, if you are using Daniel’s script, would be to use the Expand Annotations function in the Objects menu to expand all of your annotation tiles by the desired amount (which can be scripted if needed), then export. Not a fan of it being JPG though; you might want to work around that using some of the other image-writing scripts that export as TIFF.
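For reference, that expansion step can be scripted by running the same plugin the menu command uses. The parameter names below are a guess on my part, so the safest route is to run Expand annotations once manually and copy the generated line from Workflow > Create script:

```groovy
// Untested sketch: expand all annotations by ~25 µm before exporting.
// The JSON parameter names here are assumptions; verify them via
// Workflow > Create script after running Objects > Expand annotations once.
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin',
        '{"radiusMicrons": 25.0, "removeInterior": false, "constrainToParent": false}')
```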

I got lost & don’t know what ‘that script’ refers to. Also not sure exactly what is desired? Tiling an annotation might mean just restricting the tiles to the annotation’s bounding box, or it might require cropping the parts of any tiles that fall outside.