Batch Image Alignment


I’m looking to develop a set of scripts for batch image alignment. Currently, we have a set of 50 tumours, each with 3 serial sections taken (panels A–C). Each section has been imaged for 4 unique markers (channels 1–4). I’d like to align and concatenate each panel and write out a 12-channel image stack. While QuPath-Concatenate channels.groovy does exactly that, image names and transform matrices (tforms) have to be entered into the script manually. Furthermore, I’d like to separate the tform calculation step from applying the tform and writing out the image, so that the latter can run fully automatically, without a user having to edit the script to specify which image in the project should be appended and which tform should be used. Currently, I’m proposing the following:

  • Prior to alignment, create a CSV file (externally, in Excel). The 1st column contains the static (reference) image name, the 2nd column contains the names of the first set of images to transform (moving), the 3rd contains the second set, etc.
  • Load all images from all serial sections (panels) into one QuPath project. Use the built-in “interactive image alignment” to get a good alignment. To save it, run a script that writes out a file with the tform matrix, named after the moving image, similarly to QuPath-Concatenate channels.groovy except with the affine file name corresponding to the moving image.
  • Once all tforms have been written, run a script for every image in the project. If the current image name is a static image, load in the tforms of the corresponding moving images, apply each transform, concatenate onto the static image, and write out the result. This step is fully automated.
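As a sketch of how the proposed CSV could drive the pairing (illustrative Python only; the file layout and image names below are made up, following the column scheme above):

```python
import csv
import io

# Hypothetical contents of the CSV: column 1 is the static (reference)
# image, columns 2+ are the moving images to align to it.
csv_text = """static,moving1,moving2
tumour01_A,tumour01_B,tumour01_C
tumour02_A,tumour02_B,tumour02_C
"""

# Build a lookup from each static image to its list of moving images,
# so the apply-and-write step can run without any manual editing
pairs = {}
for row in csv.reader(io.StringIO(csv_text)):
    if row[0] == "static":
        continue  # skip the header row
    pairs[row[0]] = [name for name in row[1:] if name]

print(pairs["tumour01_A"])  # ['tumour01_B', 'tumour01_C']
```

The fully automated step then only needs to check whether the current image name appears in the first column, and if so, look up which tform files to load.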

Does anyone have experience with batch image registration, and if so, how did you go about doing it?

Also, how would I go about obtaining the aligned image name from the “interactive image alignment” GUI to use in QuPath-Concatenate channels.groovy?

Thank you in advance for your time,


Well, two things right off.

  1. This probably won’t be great for sequential slices, if they are your standard 4 micron slices, but hopefully you realize that and have set your expectations accordingly. If this is somehow more like cryo-EM slices, then right on!
  2. Have you looked at the option already available?
    QuPath - Multiple Image Alignment and Object Transfer
    Calling Image Alignment function from a script
    which is also referenced in the first post.

These are serial immunofluorescence sections 4 microns thick. On one panel, we have a marker for tumour oxygenation, and on the second panel, we have a cell-specific marker. Spatial oxygen distributions don’t change much across the serial sections, which is why we can get away with measuring colocalization of those markers across serial sections. Markers that are specific to individual cells (e.g. immune or proliferation) are included on the same panel.

I’ll give it a go. Calling Image Alignment function from a script (from the post above) does both batch tform calculation (intensity-based) and transformation of images. There’s a lot of variation in stain intensity across different panels, which is why I’d like to keep the option of interactive alignment (and also as a QC step). I’ll try it out and see whether it’s an issue; it may not be.



You might consider a more generalizable pixel classifier to create an annotation, then use that annotation for the alignment instead of the pure intensity. @smcardle’s most recent post includes the ability to do that.


Hi, since my last post I’ve updated the batch script to fix the identification of the image names of all the source and destination images, but somehow I cannot find the edit button to update the script in that post. Nevertheless, here’s the updated version:

/*
 * Generate transformation alignment matrix
 *
 * Yau Mun Lim, University College London, 14 October 2020
 * Script tested to be working on QuPath v0.2.3.
 *
 * Adapted from Sara McArdle's post to be able to create transformation matrices from aligning target slides to a
 * reference slide stain (refStain) in a single QuPath project containing multiple sets of serial sections.
 *
 * This script assumes WSI filenames are in the format: slideID_tissueBlock_stain.fileExt
 *
 * All matrices will be created as aligned to the reference image for every other non-reference (target) image in the project,
 * and stored in the Affine folder. Multiple alignments are possible if you want to refine the alignment, so commented out are two more
 * runs around line 204. Each uses the result from the previous to generate a more accurate alignment. AFFINE or RIGID can be used to
 * determine the type of alignment.
 *
 * Only performs intensity-based alignment.
 *
 * Output is stored in the Affine folder within the project folder.
 */

String registrationType="AFFINE"
String refStain = "H&E"
String wsiExt = ".ndpi"

import javafx.scene.transform.Affine
import qupath.lib.gui.scripting.QPEx
import qupath.lib.images.servers.ImageServer

import java.awt.Graphics2D
import java.awt.Transparency
import java.awt.color.ColorSpace
import java.awt.image.BufferedImage

import org.bytedeco.opencv.global.opencv_core;
import org.bytedeco.opencv.global.opencv_video;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.TermCriteria;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import org.bytedeco.javacpp.indexer.Indexer;

import qupath.lib.gui.dialogs.Dialogs;
import qupath.lib.images.servers.PixelCalibration;
import qupath.opencv.tools.OpenCVTools;

import qupath.lib.regions.RegionRequest;

import java.awt.image.ComponentColorModel
import java.awt.image.DataBuffer

import org.slf4j.LoggerFactory;

import static qupath.lib.gui.scripting.QPEx.*;

// Script-level logger (left undeclared so autoAlign can resolve it via the script binding)
logger = LoggerFactory.getLogger('Calculate-Transforms');

static BufferedImage ensureGrayScale(BufferedImage img) {
    if (img.getType() == BufferedImage.TYPE_BYTE_GRAY)
        return img;
    if (img.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
        ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
        def colorModel = new ComponentColorModel(cs, 8 as int[], false, true, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
        return new BufferedImage(colorModel, img.getRaster(), false, null);
    }
    BufferedImage imgGray = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g2d = imgGray.createGraphics();
    g2d.drawImage(img, 0, 0, null);
    g2d.dispose();
    return imgGray;
}
def autoAlign(ImageServer<BufferedImage> serverBase, ImageServer<BufferedImage> serverOverlay, String registrationType, Affine affine, double requestedPixelSizeMicrons) {
    PixelCalibration calBase = serverBase.getPixelCalibration();
    double pixelSize = calBase.getAveragedPixelSizeMicrons();
    double downsample = 1;
    if (!Double.isFinite(pixelSize)) {
        while (serverBase.getWidth() / downsample > 2000)
            downsample++;
        logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsample);
    } else {
        downsample = requestedPixelSizeMicrons / calBase.getAveragedPixelSizeMicrons();
    }
    BufferedImage imgBase = serverBase.readBufferedImage(RegionRequest.createInstance(serverBase.getPath(), downsample, 0, 0, serverBase.getWidth(), serverBase.getHeight()));
    BufferedImage imgOverlay = serverOverlay.readBufferedImage(RegionRequest.createInstance(serverOverlay.getPath(), downsample, 0, 0, serverOverlay.getWidth(), serverOverlay.getHeight()));

    imgBase = ensureGrayScale(imgBase);
    imgOverlay = ensureGrayScale(imgOverlay);

    Mat matBase = OpenCVTools.imageToMat(imgBase);
    Mat matOverlay = OpenCVTools.imageToMat(imgOverlay);

    Mat matTransform = Mat.eye(2, 3, opencv_core.CV_32F).asMat();
// Initialize using existing transform
//		affine.setToTransform(mxx, mxy, tx, myx, myy, ty);
    try {
        FloatIndexer indexer = matTransform.createIndexer()
        indexer.put(0, 0, (float)affine.getMxx());
        indexer.put(0, 1, (float)affine.getMxy());
        indexer.put(0, 2, (float)(affine.getTx() / downsample));
        indexer.put(1, 0, (float)affine.getMyx());
        indexer.put(1, 1, (float)affine.getMyy());
        indexer.put(1, 2, (float)(affine.getTy() / downsample));
//			System.err.println(indexer);
    } catch (Exception e) {
        logger.error("Error closing indexer", e);
    }
//		// Might want to mask out completely black pixels (could indicate missing data)?
//		def matMask = new opencv_core.Mat(matOverlay.size(), opencv_core.CV_8UC1, Scalar.ZERO);
    TermCriteria termCrit = new TermCriteria(TermCriteria.COUNT, 100, 0.0001);
//		OpenCVTools.matToImagePlus(matBase, "Base").show();
//		OpenCVTools.matToImagePlus(matOverlay, "Overlay").show();
//		Mat matTemp = new Mat();
//		opencv_imgproc.warpAffine(matOverlay, matTemp, matTransform, matBase.size());
//		OpenCVTools.matToImagePlus(matTemp, "Transformed").show();
    try {
        int motion;
        switch (registrationType) {
            case "AFFINE":
                motion = opencv_video.MOTION_AFFINE;
                break;
            case "RIGID":
                motion = opencv_video.MOTION_EUCLIDEAN;
                break;
            default:
                logger.warn("Unknown registration type {} - will use {}", registrationType, "AFFINE");
                motion = opencv_video.MOTION_AFFINE;
                break;
        }
        double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null);"Transformation result: {}", result);
    } catch (Exception e) {
        Dialogs.showErrorNotification("Estimate transform", "Unable to estimate transform - result did not converge");
        logger.error("Unable to estimate transform", e);
        return;
    }
// To use the following function, images need to be the same size
//		def matTransform = opencv_video.estimateRigidTransform(matBase, matOverlay, false);
    Indexer indexer = matTransform.createIndexer();
    affine.setToTransform(
            indexer.getDouble(0, 0),
            indexer.getDouble(0, 1),
            indexer.getDouble(0, 2) * downsample,
            indexer.getDouble(1, 0),
            indexer.getDouble(1, 1),
            indexer.getDouble(1, 2) * downsample
    );
    indexer.release();

//		matMask.release();
}

// Get list of all images in project
def projectImageList = getProject().getImageList()

// Create empty lists
def imageNameList = []
def slideIDList = []
def stainList = []
def missingList = []

// Split image file names to desired variables and add to previously created lists
for (entry in projectImageList) {
    def name = entry.getImageName()
    def (imageName, imageExt) = name.split('\\.')
    def (slideID, tissueBlock, stain) = imageName.split('_')
    imageNameList << imageName
    slideIDList << slideID + '_' + tissueBlock
    stainList << stain
}
// Remove duplicate entries from lists
slideIDList = slideIDList.unique()
stainList = stainList.unique()
if (stainList.size() == 1) {
    print 'Only one stain detected. Target slides may not be loaded.'
    return
}
// Create Affine folder to put transformation matrix files
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)

// Process all combinations of slide IDs, tissue blocks, and stains based on reference stain slide onto target slides
for (slide in slideIDList) {
    for (stain in stainList) {
        if (stain != refStain) {
            refFileName = slide + "_" + refStain + wsiExt
            targetFileName = slide + "_" + stain + wsiExt
            path = buildFilePath(PROJECT_BASE_DIR, 'Affine', targetFileName)
            def refImage = projectImageList.find {it.getImageName() == refFileName}
            def targetImage = projectImageList.find {it.getImageName() == targetFileName}
            if (refImage == null) {
                print 'Reference slide ' + refFileName + ' missing!'
                missingList << refFileName
                continue
            }
            if (targetImage == null) {
                print 'Target slide ' + targetFileName + ' missing!'
                missingList << targetFileName
                continue
            }
            println("Aligning reference " + refFileName + " to target " + targetFileName)
            ImageServer<BufferedImage> serverBase = refImage.readImageData().getServer()
            ImageServer<BufferedImage> serverOverlay = targetImage.readImageData().getServer()

            Affine affine = []

            autoAlign(serverBase, serverOverlay, registrationType, affine, 20)
            // Optional refinement passes - each starts from the previous result
            // autoAlign(serverBase, serverOverlay, registrationType, affine, 10)
            // autoAlign(serverBase, serverOverlay, registrationType, affine, 5)

            def matrix = []
            matrix << affine.getMxx()
            matrix << affine.getMxy()
            matrix << affine.getTx()
            matrix << affine.getMyx()
            matrix << affine.getMyy()
            matrix << affine.getTy()

            new File(path).withObjectOutputStream {
                it.writeObject(matrix)
            }
        }
    }
}
if (missingList.isEmpty()) {
    print 'Done!'
} else {
    missingList = missingList.unique()
    print 'Done! Missing slides: ' + missingList
}
/*
 * Transform annotations from reference slide using transformation matrix
 *
 * Yau Mun Lim, University College London, 14 October 2020
 * Script tested to be working on QuPath v0.2.3.
 *
 * Adapted from Mike Nelson's post to work on transformation matrices created from the alignment of multiple target slides
 * onto reference slides in a single QuPath project.
 *
 * This script assumes WSI filenames are in the format: slideID_tissueBlock_stain.fileExt
 *
 * If you have annotations within annotations, you may get duplicates. Ask on the forum or change the def pathObjects line.
 *
 * It will use ALL of the affine transforms in the Affine folder to transform the objects in the reference image to the target images
 * that are named in the Affine folder.
 *
 * Requires creating each affine transformation from the target images so that there are multiple transform files with different names.
 */
// SET ME! Delete existing objects
def deleteExisting = true

// SET ME! Change this if things end up in the wrong place
def createInverse = false

// Specify reference stain
String refStain = "H&E"

import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

// Affine folder path
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')

// Get list of all images in project
def projectImageList = getProject().getImageList()

// Read and obtain filenames from Affine folder
new File(path).eachFile{ f->
    f.withObjectInputStream {
        matrix = it.readObject()

        def targetFileName = f.getName()
        def (targetImageName, imageExt) = targetFileName.split('\\.')
        def (slideID, tissueBlock, targetStain) = targetImageName.split('_')

        def targetImage = projectImageList.find {it.getImageName() == targetFileName}
        if (targetImage == null) {
            print 'Could not find image with name ' + f.getName()
            return
        }
        def targetImageData = targetImage.readImageData()
        def targetHierarchy = targetImageData.getHierarchy()

        refFileName = slideID + "_" + tissueBlock + "_" + refStain + "." + imageExt
        def refImage = projectImageList.find {it.getImageName() == refFileName}
        def refImageData = refImage.readImageData()
        def refHierarchy = refImageData.getHierarchy()

        def pathObjects = refHierarchy.getAnnotationObjects()

        print 'Aligning objects from reference slide ' + refFileName + ' onto target slide ' + targetFileName

        // Define the transformation matrix
        def transform = new AffineTransform(
                matrix[0], matrix[3], matrix[1],
                matrix[4], matrix[2], matrix[5]
        )
        if (createInverse)
            transform = transform.createInverse()

        if (deleteExisting)
            targetHierarchy.clearAll()

        def newObjects = []
        for (pathObject in pathObjects) {
            newObjects << transformObject(pathObject, transform)
        }
        targetHierarchy.addPathObjects(newObjects)
        targetImage.saveImageData(targetImageData)
    }
}
print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    shape2 = transform.createTransformedShape(shape)
    return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}
Sara McArdle released a newer version of the script in Calling Image Alignment function from a script, which adds the option to align based on annotations. Given that regions of necrosis in some of our tissue sections can lead to tears and folds, intensity-based alignment alone can fail to converge to a reasonable alignment, at which point manual annotation-based alignment can be used instead.

What the script is missing is your method of automatically matching image names based on the slideID_tissueBlock_stain.fileExt format. Do you happen to have a version of the aforementioned script that includes identification of the image names of the source and destination images?

Nope, not for now. You can try to adapt and merge both Sara’s and my script to do the annotation-based batch alignment.

It looks like this should be pretty straightforward, with just copy-pasting the top part of the main code. But, if it gives you any trouble, let me know and I’ll try to help.


Thanks @smcardle and @ym.lim for your support! I was able to build a set of scripts for whole-slide image alignment and channel concatenation:
Calculate-Transforms.groovy is essentially an updated version of @ym.lim’s script with file name matching added, to support the area-based alignment featured in @smcardle’s script. Apply-Transforms.groovy builds off of @petebankhead’s QuPath-Concatenate channels.groovy by matching the reference image with the appropriate moving images based on an underscore separating the SlideID from the stain. Together, these scripts allow for semi- or fully-automated whole-slide image alignment (and optional stain separation if HDAB or H&E).
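The underscore matching described above can be sketched like this (illustrative Python only, not the actual Apply-Transforms.groovy code; the file names and reference stain are invented):

```python
# Split "slideID_stain.ext" into its slide ID and stain components
def split_name(image_name):
    stem = image_name.rsplit(".", 1)[0]     # drop the file extension
    slide_id, _, stain = stem.partition("_")  # split at the first underscore
    return slide_id, stain

images = [
    "slide07_HE.ndpi",    # hypothetical reference stain image
    "slide07_CD8.ndpi",   # hypothetical moving images
    "slide07_Ki67.ndpi",
]

ref_stain = "HE"
# The reference image is the one whose stain matches the reference stain;
# every other image with the same slide ID is treated as a moving image
reference = next(n for n in images if split_name(n) == ("slide07", ref_stain))
moving = [n for n in images if split_name(n)[1] != ref_stain]

print(reference)  # slide07_HE.ndpi
print(moving)     # ['slide07_CD8.ndpi', 'slide07_Ki67.ndpi']
```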

One bug that I’m unable to resolve is an error during image writing when one or more of the moving images has been rotated on import (which uses the RotatedImageServer). It seems to arise during the image writing step in lines 127-131 of @petebankhead’s QuPath-Concatenate channels.groovy.

I’ve included an example project for debugging, and the image used can be downloaded separately. The issue is not present when images are not rotated on import. The file type seems to be irrelevant (I was able to reproduce it using both .qptiff and .vsi files). To reproduce, run Apply-Transforms.groovy on ‘LuCa-7color-Scan1_reference.qptiff’. Any feedback on how to write out these multichannel ome.tiffs would be greatly appreciated!


If you just need to get it done, could you adjust the affine matrix to account for the rotation, rather than rotating on import?
And yes, that’s not elegant.


@Mark_Zaidi can you please post the exact error you see (full stack trace)? I tried to replicate it, but I just get LuCa-7color-Scan1_rot90.qptiff.ome.tif! No compatible writer found. which is presumably because the file extension is unexpected – but not sure if I got all the steps right.

Edit: I see that this probably is the error, but it swallows up the real one, which is Unable to read image for BioFormatsImageServer: file:/Users/pete/Documents/QuPath/Images/Training%20images/Perkin%20Elmer/LuCa-7color_Scan1.qptiff[--series, 0]: x=10594, y=36593, w=719, h=-2033, z=0, t=0, downsample=1
	at qupath.lib.images.servers.RotatedImageServer.rotate90(
	at qupath.lib.images.servers.RotatedImageServer.readBufferedImage(
	at qupath.lib.images.servers.RotatedImageServer.readBufferedImage(

True, could do that. Image alignment often fails to converge if the images are rotated by more than ~90 degrees, though, so I’ve been using Mike Nelson’s script, importing without rotation, and manually aligning with the Interactive Image Alignment GUI tool for the cases in which slides were scanned at different orientations.


The error reveals that the requested height is -2033… which can’t be good. This causes a null region, which can’t be written.
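One plausible way such a negative extent can arise is naive clipping of an out-of-bounds region request (a sketch only, not QuPath’s actual rotate90 code; the request height of 967 and image height of 34560 are assumed values chosen to reproduce the numbers in the error):

```python
# Naive intersection of a requested region with the image bounds.
# If the mapped request starts beyond the bottom edge of the image,
# the computed height goes negative instead of being clamped to zero.
def intersect(x, y, w, h, img_w, img_h):
    x1, y1 = min(x + w, img_w), min(y + h, img_h)
    return (x, y, x1 - x, y1 - y)

# y = 36593 lies beyond an assumed image height of 34560, so the
# "clipped" height becomes 34560 - 36593 = -2033
print(intersect(10594, 36593, 719, 967, img_w=12000, img_h=34560))
# -> (10594, 36593, 719, -2033)
```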

Changing the code in to be

if (img == null)
   return null;

at the point where the tile image is read gets past the error… but I’m not entirely sure what the other consequences of that change may be.

INFO: Current image name: LuCa-7color-Scan1_reference.qptiff
INFO: Processing: LuCa-7color-Scan1
INFO: mapentry: LuCa-7color-Scan1_reference.qptiff=AffineTransform[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
INFO: LuCa-7color-Scan1_reference.qptiff
INFO: null
INFO: mapentry: LuCa-7color-Scan1_rot90.qptiff=AffineTransform[[0.804383130731699, -0.594110885478936, 8687.793406759634], [0.594110885478936, 0.804383130731699, -7133.757449451279]]
INFO: LuCa-7color-Scan1_rot90.qptiff
INFO: null
INFO: Channels: 10
WARN: Deleting existing file C:\Users\Mark Zaidi\Documents\QuPath\Image Alignment Demo\LuCa-7color-Scan1_reference.qptiff.ome.tif
INFO: Writing LuCa-7color-Scan1_reference.qptiff - resolution #1 to C:\Users\Mark Zaidi\Documents\QuPath\Image Alignment Demo\LuCa-7color-Scan1_reference.qptiff.ome.tif (series 1/1)
INFO: Writing resolution 1 of 4 (downsample=1.0, 3332 tiles)
INFO: Writing plane 1/10
WARN: Unable to write image
ERROR: IOException at line 242: Unable to write C:\Users\Mark Zaidi\Documents\QuPath\Image Alignment Demo\LuCa-7color-Scan1_reference.qptiff.ome.tif! No compatible writer found.

ERROR: qupath.lib.images.writers.ImageWriterTools.writeImage(
qupath.lib.scripting.QP$writeImage$1.callStatic(Unknown Source)
java.base/java.util.concurrent.Executors$ Source)
java.base/ Source)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base/java.util.concurrent.ThreadPoolExecutor$ Source)
java.base/ Source)

Are you running Apply-Transforms only on LuCa-7color-Scan1_reference.qptiff? That’s the only image in the project that it should be executed for.

On a side note, changing the downsample factor results in the file getting written, but errors are still thrown for the specific tiles it fails on, resulting in a partially corrupted image.

Yep, this would be only slightly less manual. Essentially, create a project of all of the images where you needed a 90-degree rotation, use the stored affine transformation, but apply a 90-degree rotation to the affine matrix after importing all of the images “normally.” It would also require a second project, but at least you could batch everything that used the same amount of rotation.
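Folding the rotation into the stored matrix is just affine composition. A minimal sketch of the idea in plain Python (the matrix values and image height are made up; in QuPath the equivalent would be done with JavaFX Affine or java.awt.geom.AffineTransform):

```python
# Each affine is (mxx, mxy, tx, myx, myy, ty), mapping
# (x, y) -> (mxx*x + mxy*y + tx, myx*x + myy*y + ty)
def compose(outer, inner):
    # Affine equivalent to applying `inner` first, then `outer`
    a1, b1, tx1, c1, d1, ty1 = outer
    a2, b2, tx2, c2, d2, ty2 = inner
    return (a1*a2 + b1*c2, a1*b2 + b1*d2, a1*tx2 + b1*ty2 + tx1,
            c1*a2 + d1*c2, c1*b2 + d1*d2, c1*tx2 + d1*ty2 + ty1)

H = 1000.0                                   # assumed height of the unrotated image
rot90 = (0.0, -1.0, H, 1.0, 0.0, 0.0)        # 90-degree rotation: (x, y) -> (H - y, x)
stored = (1.0, 0.0, 50.0, 0.0, 1.0, -20.0)   # made-up alignment transform

combined = compose(stored, rot90)

# Applying the combined matrix to a point matches rotating first, then aligning
x, y = 10.0, 30.0
applied = (combined[0]*x + combined[1]*y + combined[2],
           combined[3]*x + combined[4]*y + combined[5])
print(applied)  # (1020.0, -10.0), i.e. the rotated point shifted by (50, -20)
```

The composition order matters: here the rotation is applied first, matching a workflow where images are imported unrotated but the stored alignment was computed against rotated imports.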


Since the problem is buried inside QuPath, I’ve created an issue for it.


Since we have 4 stain panels to align for 40 tumours, each of the panels can be rotated by a random multiple of 90 degrees, and there are varying rotations within the panels themselves (I wasn’t aware that the tissue was being placed on the slide at a random orientation during histology), Mike’s script seems to solve this. Alternatively, I could also rotate the original VSI files prior to importing them into QuPath.


Thanks, much appreciated. I’ll try out the hotfix you recommended of modifying, and will let you know if I come across any issues with that.

I’ve edited in qupath-core/src/main/java/qupath/lib/images/servers/ as illustrated above, but I’m still getting the same error. Do I need to build QuPath with the edited Java file, as outlined in the build instructions? My apologies if that’s the case; I’m not that familiar with building software.

Yes – the up-to-date instructions are here:

(Note that you’ll need Java 14… some changes in Java 15 break the process)
