Calling Image Alignment function from a script

@Mike_Nelson and I wrote a script that aligns all images in a project to a single image.

This comes directly from the QuPath source code and was modified just enough to run as a Groovy script (https://github.com/qupath/qupath/blob/master/qupath-experimental/src/main/java/qupath/lib/gui/align/ImageAlignmentPane.java#L748).

Set the registration type to “RIGID” or “AFFINE” on line 17 (the registrationType variable). This script only performs intensity-based alignment, not annotation- or points-based.

Run this script from the “source” image. It goes through every other image in the project, aligns it to the source image, and saves the transform matrix as an object in the “Affine” subfolder of the project folder, named after the target image. To transfer annotations between images, switch to another image in the project (one of the target images) and run the scripts linked by @Mike_Nelson below, making sure the createInverse variable is set correctly. Look for a simpler way to do this in the near future 🙂

Written for 0.2.0-m9.

/***********************
Sara McArdle of the La Jolla Institute, March 2020. Copied shamelessly from the QuPath source code (https://github.com/qupath/qupath/blob/master/qupath-experimental/src/main/java/qupath/lib/gui/align/ImageAlignmentPane.java#L748) and adapted to be a valid groovy script.

"For project" adaptation by Michael Nelson

Run from "source" image. All matrices will be created as aligned to that image for every other image in the project, 
and stored in the Affine folder. Multiple alignments are possible if you want to refine the alignment, so commented out are two more
runs around line 168. Each uses the result from the previous to generate a more accurate alignment. AFFINE or RIGID can be used to
determine the type of alignment.

Only performs intensity-based alignment.

Output stored to the Affine folder within the Project folder.

***********************/
//////////////////////////////////
String registrationType="RIGID"
/////////////////////////////////

import javafx.scene.transform.Affine
import qupath.lib.gui.scripting.QPEx
import qupath.lib.images.servers.ImageServer

import java.awt.Graphics2D
import java.awt.Transparency
import java.awt.color.ColorSpace
import java.awt.image.BufferedImage

import org.bytedeco.opencv.global.opencv_core;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.TermCriteria;
import org.bytedeco.opencv.global.opencv_video;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import org.bytedeco.javacpp.indexer.Indexer;

import qupath.lib.gui.dialogs.Dialogs;
import qupath.lib.images.servers.PixelCalibration;

import qupath.lib.regions.RegionRequest;
import qupath.opencv.tools.OpenCVTools

import java.awt.image.ComponentColorModel
import java.awt.image.DataBuffer

import static qupath.lib.gui.scripting.QPEx.*;


static BufferedImage ensureGrayScale(BufferedImage img) {
    if (img.getType() == BufferedImage.TYPE_BYTE_GRAY)
        return img;
    if (img.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
        ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
        def colorModel = new ComponentColorModel(cs, [8] as int[], false, true,
                Transparency.OPAQUE,
                DataBuffer.TYPE_BYTE);
        return new BufferedImage(colorModel, img.getRaster(), false, null);
    }
    BufferedImage imgGray = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g2d = imgGray.createGraphics();
    g2d.drawImage(img, 0, 0, null);
    g2d.dispose();
    return imgGray;
}

def autoAlign(ImageServer<BufferedImage> serverBase, ImageServer<BufferedImage> serverOverlay, String registrationType, Affine affine, double requestedPixelSizeMicrons) {
    PixelCalibration calBase = serverBase.getPixelCalibration();
    double pixelSize = calBase.getAveragedPixelSizeMicrons();
    double downsample = 1;
    if (!Double.isFinite(pixelSize)) {
        while (serverBase.getWidth() / downsample > 2000)
            downsample++;
        logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsample);
    } else {
        downsample = requestedPixelSizeMicrons / calBase.getAveragedPixelSizeMicrons();
    }

    BufferedImage imgBase = serverBase.readBufferedImage(RegionRequest.createInstance(serverBase.getPath(), downsample, 0, 0, serverBase.getWidth(), serverBase.getHeight()));
    BufferedImage imgOverlay = serverOverlay.readBufferedImage(RegionRequest.createInstance(serverOverlay.getPath(), downsample, 0, 0, serverOverlay.getWidth(), serverOverlay.getHeight()));

    imgBase = ensureGrayScale(imgBase);
    imgOverlay = ensureGrayScale(imgOverlay);

    Mat matBase = OpenCVTools.imageToMat(imgBase);
    Mat matOverlay = OpenCVTools.imageToMat(imgOverlay);

    Mat matTransform = Mat.eye(2, 3, opencv_core.CV_32F).asMat();
// Initialize using existing transform
//		affine.setToTransform(mxx, mxy, tx, myx, myy, ty);
    try {
        FloatIndexer indexer = matTransform.createIndexer()
        indexer.put(0, 0, (float)affine.getMxx());
        indexer.put(0, 1, (float)affine.getMxy());
        indexer.put(0, 2, (float)(affine.getTx() / downsample));
        indexer.put(1, 0, (float)affine.getMyx());
        indexer.put(1, 1, (float)affine.getMyy());
        indexer.put(1, 2, (float)(affine.getTy() / downsample));
//			System.err.println(indexer);
    } catch (Exception e) {
        logger.error("Error closing indexer", e);
    }


//		// Might want to mask out completely black pixels (could indicate missing data)?
//		def matMask = new opencv_core.Mat(matOverlay.size(), opencv_core.CV_8UC1, Scalar.ZERO);
    TermCriteria termCrit = new TermCriteria(TermCriteria.COUNT, 100, 0.0001);
//		OpenCVTools.matToImagePlus(matBase, "Base").show();
//		OpenCVTools.matToImagePlus(matOverlay, "Overlay").show();
////
//		Mat matTemp = new Mat();
//		opencv_imgproc.warpAffine(matOverlay, matTemp, matTransform, matBase.size());
//		OpenCVTools.matToImagePlus(matTemp, "Transformed").show();
    try {
        int motion;
        switch (registrationType) {
            case "AFFINE":
                motion = opencv_video.MOTION_AFFINE;
                break;
            case "RIGID":
                motion = opencv_video.MOTION_EUCLIDEAN;
                break;
            default:
                logger.warn("Unknown registraton type {} - will use {}", registrationType, "AFFINE");
                motion = opencv_video.MOTION_AFFINE;
                break;
        }
        double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null);
        logger.info("Transformation result: {}", result);
    } catch (Exception e) {
        Dialogs.showErrorNotification("Estimate transform", "Unable to estimated transform - result did not converge");
        logger.error("Unable to estimate transform", e);
        return;
    }

// To use the following function, images need to be the same size
//		def matTransform = opencv_video.estimateRigidTransform(matBase, matOverlay, false);
    Indexer indexer = matTransform.createIndexer();
    affine.setToTransform(
            indexer.getDouble(0, 0),
            indexer.getDouble(0, 1),
            indexer.getDouble(0, 2) * downsample,
            indexer.getDouble(1, 0),
            indexer.getDouble(1, 1),
            indexer.getDouble(1, 2) * downsample
    );
    indexer.release();

//		matMask.release();
    matBase.release();
    matOverlay.release();
    matTransform.release();

}

ImageServer<BufferedImage> serverBase= QPEx.getCurrentImageData().getServer()
def baseImageName = getProjectEntry().getImageName()
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)

getProject().getImageList().each{
    if(it.getImageName() != baseImageName){
        println("Working on: "+it)
        path = buildFilePath(PROJECT_BASE_DIR, 'Affine',  it.toString())
        ImageServer<BufferedImage> serverOverlay=it.readImageData().getServer()

        Affine affine=[]

        autoAlign(serverBase,serverOverlay,registrationType,affine,20)
        //autoAlign(serverBase,serverOverlay,registrationType,affine,10)
        //autoAlign(serverBase,serverOverlay,registrationType,affine,5)


        def matrix = []
        matrix << affine.getMxx()
        matrix << affine.getMxy()
        matrix << affine.getTx()
        matrix << affine.getMyx()
        matrix << affine.getMyy()
        matrix << affine.getTy()
        
        new File(path).withObjectOutputStream {
            it.writeObject(matrix)
        }

        /* //To display the transform matrix:
        def readable=String.format(
                "%.4f, \t %.4f,\t %.4f,\n" +
                "%.4f,\t %.4f,\t %.4f",
                affine.getMxx(), affine.getMxy(), affine.getTx(), affine.getMyx(), affine.getMyy(), affine.getTy())

        print(readable)
        */

    }
}
print 'Done!'

Now in the process of adjusting a bunch of the text in the linked post to reflect the new, easier way to transfer objects around!
The main changes will be that the createInverse variable will need to be flipped between true and false when using Sara’s new script. Otherwise the scripts work as described, either collecting all objects from other images and bringing them into a single image, or distributing all objects from one image to all other images.
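
As a minimal sketch of what that flip amounts to (using a hypothetical file name for one of the matrices saved by the script above), the transfer scripts essentially read the matrix back and build an AffineTransform from it:

import java.awt.geom.AffineTransform
import static qupath.lib.gui.scripting.QPEx.*

// Hypothetical name - use one of the files written into the Affine folder
def targetName = 'target_image_name'
def matrixFile = new File(buildFilePath(PROJECT_BASE_DIR, 'Affine', targetName))
def matrix = null
matrixFile.withObjectInputStream { matrix = it.readObject() }

// matrix is [mxx, mxy, tx, myx, myy, ty]; AffineTransform expects (m00, m10, m01, m11, m02, m12)
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5])

// Flip createInverse if the transferred objects end up in the wrong place
boolean createInverse = true
if (createInverse)
    transform = transform.createInverse()
println transform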


Hi @smcardle and @Mike_Nelson,

Thank you so much for the great work on making these alignment scripts available!

I am trying to adapt your script to process alignment of different sets of serial sections in one project, e.g.:

A_GFAP.ndpi
A_H&E.ndpi
A_IBA1.ndpi
B_GFAP.ndpi
B_H&E.ndpi
B_IBA1.ndpi

I thought I could specify a reference stain, e.g. H&E, and then align the annotation on the H&E slide to other stains (GFAP, IBA1) with the same slide ID.

I am currently in the middle of adapting the script and I want to understand what some variables are used for. At the moment, ones I am unsure of are:

serverBase in

ImageServer<BufferedImage> serverBase = QPEx.getCurrentImageData().getServer()

and serverOverlay in

ImageServer<BufferedImage> serverOverlay=it.readImageData().getServer()

that are used in the autoAlign command.

My adapted script at the moment is messy but the gist of it is I ideally would not need to have the source image open in the viewer and just grab the necessary information from the slide with the specified reference stain. The two variables above would require an active image opened in the viewer.

If it will be helpful, I can post my script so far on request.

Best wishes,
Yau


Hi Yau,
That bit of code was taken directly from the QuPath source code with minimal modification just to make it run in a script outside of the plugin window. @petebankhead would know more details.

But, I think that it.readImageData().getServer() does not require an open image. The “it” is the second image that is being called by name, using the loop : getProject().getImageList().each{}. So, I think you can modify those two lines to reference the H&E image and the other image without having either open. Something like this (did not test this, just going from memory):

HEimg = getProject().getImageList().find{it.getImageName().contains('HE')}
GFAPimg = getProject().getImageList().find{it.getImageName().contains('GFAP')}

ImageServer<BufferedImage> HEserver = HEimg.readImageData().getServer()
ImageServer<BufferedImage> GFAPserver = GFAPimg.readImageData().getServer()

Let me know if this works!


The documentation on reading images from projects – without them being open in the viewer – is at https://qupath.readthedocs.io/en/latest/docs/scripting/overview.html#projects
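
For example, a minimal sketch of that pattern (the 'HE' match string is just a placeholder):

import static qupath.lib.gui.scripting.QPEx.*

def entry = getProject().getImageList().find { it.getImageName().contains('HE') }
def imageData = entry.readImageData()
def server = imageData.getServer()
println server.getWidth() + ' x ' + server.getHeight()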


Thanks for the responses so far!

I had a different (but perhaps unnecessarily convoluted) way of specifying the reference and target slides. I did not want to excessively change what I had, so I tried the readImageData().getServer() lines first.

Here’s my code:

String refStain = "H&E"
String wsiExt = ".ndpi"

// Get list of all images in project
String[] projectImageList = getProject().getImageList()

// Create empty lists
def imageNameList = []
def slideIDList = []
def tissueBlockList = []
def stainList = []

// Split image file names to desired variables and add to previously created lists
projectImageList.each{
    def (imageName, imageExt) = it.split('\\.')
    def (slideID, tissueBlock, stain) = imageName.split('_')
    imageNameList << imageName
    slideIDList << slideID
    tissueBlockList << tissueBlock
    stainList << stain
}

// Remove duplicate entries from lists
slideIDList = slideIDList.unique()
tissueBlockList = tissueBlockList.unique()
stainList = stainList.unique()

// Create Affine folder to put transformation matrix files
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)

// Process all combinations of slide IDs, tissue blocks, and stains based on reference stain slide onto target slides
for (slide in slideIDList) {
    for (block in tissueBlockList) {
        for (stain in stainList) {
            if (stain != refStain) {
                println("Working on: " + slide + "_" + block)
                refFileName = slide + "_" + block + "_" + refStain + wsiExt
                targetFileName = slide + "_" + block + "_" + stain + wsiExt
                path = buildFilePath(PROJECT_BASE_DIR, 'Affine', targetFileName)
                ImageServer<BufferedImage> serverBase = refFileName.readImageData().getServer()
                ImageServer<BufferedImage> serverOverlay = targetFileName.readImageData().getServer()

                Affine affine=[]

                autoAlign(serverBase,serverOverlay,registrationType,affine,20)
                //autoAlign(serverBase,serverOverlay,registrationType,affine,10)
                //autoAlign(serverBase,serverOverlay,registrationType,affine,5)

                def matrix = []
                matrix << affine.getMxx()
                matrix << affine.getMxy()
                matrix << affine.getTx()
                matrix << affine.getMyx()
                matrix << affine.getMyy()
                matrix << affine.getTy()

                new File(path).withObjectOutputStream {
                    it.writeObject(matrix)
                }
            }
        }
    }
}

When I ran this (with the usual, untouched alignment codes above it), I got this error:

ERROR: MissingMethodException at line 188: No signature of method: java.lang.String.readImageData() is applicable for argument types: () values: []

The line in question is:

ImageServer<BufferedImage> serverBase = refFileName.readImageData().getServer()

I’ve tested refFileName and targetFileName and both return legitimate filenames for the images. I suspect this error means that these variables are in the wrong format?


So this question has taught me some interesting new Groovyness.

What is returned from getProject().getImageList() should be (in Java) a List of ProjectImageEntry.

But the way in which it is declared in your script, it is converted to a String array (presumably using the toString() method applied to each entry).

def images = getProject().getImageList()
println images[0]
println images[0].class

String[] imageNames = getProject().getImageList()
println imageNames[0]
println imageNames[0].class

I hadn’t known Groovy would do that conversion automatically.

In any case, readImageData() is a method that is defined as part of the ProjectImageEntry… not on the String.


I converted getProject().getImageList() into String so that I could use split(), otherwise it would give me an error.

Can you please elaborate a bit more on this? I do not fully understand but I assume you meant projectImageList that I used as a list of all the images? Or is it actually meant for something else?

I haven’t tested it, but you could replace the first part of your code with something like this:
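
A sketch of that approach (keeping the ProjectImageEntry objects returned by getImageList() and splitting only their names, essentially what the working script further down ends up doing) might be:

def projectImageList = getProject().getImageList()

// Keep the ProjectImageEntry objects; only split their names into the pieces you need
def imageNameList = []
def slideIDList = []
def tissueBlockList = []
def stainList = []

projectImageList.each { entry ->
    def name = entry.getImageName()
    def (imageName, imageExt) = name.split('\\.')
    def (slideID, tissueBlock, stain) = imageName.split('_')
    imageNameList << imageName
    slideIDList << slideID
    tissueBlockList << tissueBlock
    stainList << stain
}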

This will

  • Not rely upon Groovy’s automatic conversion (which won’t necessarily get the image name only… the toString() method might include metadata)
  • Be a bit easier to read since it involves the each method
  • Preserve the actual ProjectImageEntry objects you need to be able to use readImageData

You will, however, need to find the right entry to match refFileName etc. later. Something like this should do it:

def entry = projectImageList.find {it.getImageName() == refFileName}
ImageServer<BufferedImage> serverBase = entry.getServer()

After implementing @petebankhead’s suggestions, the code works!

Now I’ll be adapting @Mike_Nelson’s script to transform the annotation from the reference image without having the reference image actively opened on the viewer.


I managed to get it to work!

Here’s the script to generate the transformation matrices:

/***********************
Yau Mun Lim, University College London, 16 June 2020
Script tested to be working on QuPath v0.2.0.

Adapted from Sara McArdle's post (https://forum.image.sc/t/calling-image-alignment-function-from-a-script/35617) to be able to create transformation matrices from aligning target slides to a reference slide stain (refStain) in a single QuPath project containing multiple sets of serial sections.

This script assumes WSI filenames are in the format: slideID_tissueBlock_stain.fileExt

All matrices will be created as aligned to the reference image for every other non-reference (target) images in the project, 
and stored in the Affine folder. Multiple alignments are possible if you want to refine the alignment, so commented out are two more
runs around line 204. Each uses the result from the previous to generate a more accurate alignment. AFFINE or RIGID can be used to
determine the type of alignment.

Only performs intensity-based alignment.

Output stored to the Affine folder within the Project folder.

***********************/

//////////////////////////////////
String registrationType="AFFINE"
String refStain = "H&E"
String wsiExt = ".ndpi"
/////////////////////////////////

import javafx.scene.transform.Affine
import qupath.lib.gui.scripting.QPEx
import qupath.lib.images.servers.ImageServer

import java.awt.Graphics2D
import java.awt.Transparency
import java.awt.color.ColorSpace
import java.awt.image.BufferedImage

import org.bytedeco.opencv.global.opencv_core;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.TermCriteria;
import org.bytedeco.opencv.global.opencv_video;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import org.bytedeco.javacpp.indexer.Indexer;

import qupath.lib.gui.dialogs.Dialogs;
import qupath.lib.images.servers.PixelCalibration;

import qupath.lib.regions.RegionRequest;
import qupath.opencv.tools.OpenCVTools

import java.awt.image.ComponentColorModel
import java.awt.image.DataBuffer

import static qupath.lib.gui.scripting.QPEx.*;


static BufferedImage ensureGrayScale(BufferedImage img) {
    if (img.getType() == BufferedImage.TYPE_BYTE_GRAY)
        return img;
    if (img.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
        ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
        def colorModel = new ComponentColorModel(cs, [8] as int[], false, true,
                Transparency.OPAQUE,
                DataBuffer.TYPE_BYTE);
        return new BufferedImage(colorModel, img.getRaster(), false, null);
    }
    BufferedImage imgGray = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g2d = imgGray.createGraphics();
    g2d.drawImage(img, 0, 0, null);
    g2d.dispose();
    return imgGray;
}

def autoAlign(ImageServer<BufferedImage> serverBase, ImageServer<BufferedImage> serverOverlay, String registrationType, Affine affine, double requestedPixelSizeMicrons) {
    PixelCalibration calBase = serverBase.getPixelCalibration();
    double pixelSize = calBase.getAveragedPixelSizeMicrons();
    double downsample = 1;
    if (!Double.isFinite(pixelSize)) {
        while (serverBase.getWidth() / downsample > 2000)
            downsample++;
        logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsample);
    } else {
        downsample = requestedPixelSizeMicrons / calBase.getAveragedPixelSizeMicrons();
    }

    BufferedImage imgBase = serverBase.readBufferedImage(RegionRequest.createInstance(serverBase.getPath(), downsample, 0, 0, serverBase.getWidth(), serverBase.getHeight()));
    BufferedImage imgOverlay = serverOverlay.readBufferedImage(RegionRequest.createInstance(serverOverlay.getPath(), downsample, 0, 0, serverOverlay.getWidth(), serverOverlay.getHeight()));

    imgBase = ensureGrayScale(imgBase);
    imgOverlay = ensureGrayScale(imgOverlay);

    Mat matBase = OpenCVTools.imageToMat(imgBase);
    Mat matOverlay = OpenCVTools.imageToMat(imgOverlay);

    Mat matTransform = Mat.eye(2, 3, opencv_core.CV_32F).asMat();
// Initialize using existing transform
//		affine.setToTransform(mxx, mxy, tx, myx, myy, ty);
    try {
        FloatIndexer indexer = matTransform.createIndexer()
        indexer.put(0, 0, (float)affine.getMxx());
        indexer.put(0, 1, (float)affine.getMxy());
        indexer.put(0, 2, (float)(affine.getTx() / downsample));
        indexer.put(1, 0, (float)affine.getMyx());
        indexer.put(1, 1, (float)affine.getMyy());
        indexer.put(1, 2, (float)(affine.getTy() / downsample));
//			System.err.println(indexer);
    } catch (Exception e) {
        logger.error("Error closing indexer", e);
    }


//		// Might want to mask out completely black pixels (could indicate missing data)?
//		def matMask = new opencv_core.Mat(matOverlay.size(), opencv_core.CV_8UC1, Scalar.ZERO);
    TermCriteria termCrit = new TermCriteria(TermCriteria.COUNT, 100, 0.0001);
//		OpenCVTools.matToImagePlus(matBase, "Base").show();
//		OpenCVTools.matToImagePlus(matOverlay, "Overlay").show();
////
//		Mat matTemp = new Mat();
//		opencv_imgproc.warpAffine(matOverlay, matTemp, matTransform, matBase.size());
//		OpenCVTools.matToImagePlus(matTemp, "Transformed").show();
    try {
        int motion;
        switch (registrationType) {
            case "AFFINE":
                motion = opencv_video.MOTION_AFFINE;
                break;
            case "RIGID":
                motion = opencv_video.MOTION_EUCLIDEAN;
                break;
            default:
                logger.warn("Unknown registraton type {} - will use {}", registrationType, "AFFINE");
                motion = opencv_video.MOTION_AFFINE;
                break;
        }
        double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null);
        logger.info("Transformation result: {}", result);
    } catch (Exception e) {
        Dialogs.showErrorNotification("Estimate transform", "Unable to estimated transform - result did not converge");
        logger.error("Unable to estimate transform", e);
        return;
    }

// To use the following function, images need to be the same size
//		def matTransform = opencv_video.estimateRigidTransform(matBase, matOverlay, false);
    Indexer indexer = matTransform.createIndexer();
    affine.setToTransform(
            indexer.getDouble(0, 0),
            indexer.getDouble(0, 1),
            indexer.getDouble(0, 2) * downsample,
            indexer.getDouble(1, 0),
            indexer.getDouble(1, 1),
            indexer.getDouble(1, 2) * downsample
    );
    indexer.release();

//		matMask.release();
    matBase.release();
    matOverlay.release();
    matTransform.release();

}

// Get list of all images in project
def projectImageList = getProject().getImageList()

// Create empty lists
def imageNameList = []
def slideIDList = []
def tissueBlockList = []
def stainList = []
def missingList = []

// Split image file names to desired variables and add to previously created lists
for (entry in projectImageList) {
    def name = entry.getImageName()
    def (imageName, imageExt) = name.split('\\.')
    def (slideID, tissueBlock, stain) = imageName.split('_')
    imageNameList << imageName
    slideIDList << slideID
    tissueBlockList << tissueBlock
    stainList << stain
}

// Remove duplicate entries from lists
slideIDList = slideIDList.unique()
tissueBlockList = tissueBlockList.unique()
stainList = stainList.unique()

// Create Affine folder to put transformation matrix files
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)

// Process all combinations of slide IDs, tissue blocks, and stains based on reference stain slide onto target slides
for (slide in slideIDList) {
    for (block in tissueBlockList) {
        for (stain in stainList) {
            if (stain != refStain) {
                refFileName = slide + "_" + block + "_" + refStain + wsiExt
                targetFileName = slide + "_" + block + "_" + stain + wsiExt
                path = buildFilePath(PROJECT_BASE_DIR, 'Affine', targetFileName)
                def refImage = projectImageList.find {it.getImageName() == refFileName}
                def targetImage = projectImageList.find {it.getImageName() == targetFileName}
                if (refImage == null) {
                    print 'Reference slide ' + refFileName + ' missing!'
                    missingList << refFileName
                    continue
                }
                if (targetImage == null) {
                    print 'Target slide ' + targetFileName + ' missing!'
                    missingList << targetFileName
                    continue
                }
                println("Aligning reference " + refFileName + " to target " + targetFileName)
                ImageServer<BufferedImage> serverBase = refImage.readImageData().getServer()
                ImageServer<BufferedImage> serverOverlay = targetImage.readImageData().getServer()

                Affine affine=[]

                autoAlign(serverBase,serverOverlay,registrationType,affine,20)
                autoAlign(serverBase,serverOverlay,registrationType,affine,10)
                autoAlign(serverBase,serverOverlay,registrationType,affine,5)

                def matrix = []
                matrix << affine.getMxx()
                matrix << affine.getMxy()
                matrix << affine.getTx()
                matrix << affine.getMyx()
                matrix << affine.getMyy()
                matrix << affine.getTy()

                new File(path).withObjectOutputStream {
                    it.writeObject(matrix)
                }
            }
        }
    }
}

if (missingList.isEmpty() == true) {
    print 'Done!'
} else {
    missingList = missingList.unique()
    print 'Done! Missing slides: ' + missingList
}

and the script to transform objects from the reference slide onto the target slides:

/***********************
Yau Mun Lim, University College London, 11 June 2020
Script tested to be working on QuPath v0.2.0.

Adapted from Mike Nelson's post (https://forum.image.sc/t/qupath-multiple-image-alignment-and-object-transfer/35521/2) to work on transformation matrices created from the alignment of multiple target slides onto reference slides in a single QuPath project.

This script assumes WSI filenames are in the format: slideID_tissueBlock_stain.fileExt

If you have annotations within annotations, you may get duplicates. Ask on the forum or change the def pathObjects line.

It will use ALL of the affine transforms in the Affine folder to transform the objects in the reference image to the target images
that are named in the Affine folder. 

Requires creating each affine transformation from the target images so that there are multiple transform files with different names.
***********************/
 
// SET ME! Delete existing objects
def deleteExisting = true

// SET ME! Change this if things end up in the wrong place
def createInverse = false

// Specify reference stain
String refStain = "H&E"

import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

// Affine folder path
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')

// Get list of all images in project
def projectImageList = getProject().getImageList()

new File(path).eachFile{ f->
    f.withObjectInputStream {
        matrix = it.readObject()

        def targetFileName = f.getName()
        def (targetImageName, imageExt) = targetFileName.split('\\.')
        def (slideID, tissueBlock, targetStain) = targetImageName.split('_')

        def targetImage = projectImageList.find {it.getImageName() == targetFileName}
        if (targetImage == null) {
            print 'Could not find image with name ' + f.getName()
            return
        }
        def targetImageData = targetImage.readImageData()
        def targetHierarchy = targetImageData.getHierarchy()

        refFileName = slideID + "_" + tissueBlock + "_" + refStain + "." + imageExt
        def refImage = projectImageList.find {it.getImageName() == refFileName}
        def refImageData = refImage.readImageData()
        def refHierarchy = refImageData.getHierarchy()

        def pathObjects = refHierarchy.getAnnotationObjects()

        print 'Aligning objects from reference slide ' + refFileName + ' onto target slide ' + targetFileName

        // Define the transformation matrix
        def transform = new AffineTransform(
                matrix[0], matrix[3], matrix[1],
                matrix[4], matrix[2], matrix[5]
        )
        if (createInverse)
            transform = transform.createInverse()
            
        if (deleteExisting)
            targetHierarchy.clearAll()
            
        def newObjects = []
        for (pathObject in pathObjects) {
            newObjects << transformObject(pathObject, transform)
        }
        targetHierarchy.addPathObjects(newObjects)
        targetImage.saveImageData(targetImageData)
    }
}
print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    shape2 = transform.createTransformedShape(shape)
    return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}

It looked like it worked on my end, but the transformation isn’t that great, which could be due to my images and not the code. I’d be grateful if these scripts could be tested!

Thanks @petebankhead, @smcardle, @Mike_Nelson!


Without the images it would be hard to test, but I am glad you got it all working! I am not sure if we can help, but if you can elaborate on “the transformation isn’t that great,” one of us might be able to come up with something. Alternatively, you could share one set of the original images on Google Drive or some other file sharing service.

If the alignment is consistently wrong in terms of tilt, I would definitely make sure the createInverse is set in the correct direction for the way in which you are aligning and transferring the objects (try flipping it).

Alternatively, if they are images of sequential slices, which seems likely given the different stains, they will never really align, as expected. It would be like slicing an orange and trying to put it back together as a cylinder. Even 5 micron slices are relatively long distances at the cellular level.

*And for what it’s worth, even strip stains don’t align well enough for cellular measurements (double positive across slices) without deformable image registration.

The problem is most likely due to scanning artifacts on the slides I tested the script on. Many of them have parts of the tissue touching or cut off at the edge of the scanned image.

The transformation improved slightly when I tried to use pixel size 5 in the alignment, which suggests that createInverse is set correctly.

I’m working on clinical samples, so I’m hesitant to share the slides.

But if you do have a set of serial sections named in the format of slideID_tissue_stain, e.g.:

A_brain_H&E.ndpi
A_brain_GFAP.ndpi
A_brain_IBA1.ndpi
B_brain_H&E.ndpi
B_brain_GFAP.ndpi
B_brain_IBA1.ndpi
C_brain_H&E.ndpi
C_brain_GFAP.ndpi
C_brain_IBA1.ndpi

the scripts should work for you to test!

I’m glad it’s (mostly) working!

How do the transformations look when you use the original plugin?

Here are my test alignments at pixel size 20 and at pixel size 5:

Reference slide is H&E. It aligns well for the middle image, but not so much the right image. It did improve with pixel size 5 vs 20, but I wish it could be a bit better.

That looks pretty good given the damage/differences to the tissue between images. To get better you would need deformable image registration, and even that often makes things worse in situations like the middle image, where areas of tissue are “missing” (folded back), as the deformable alignment tries to stretch existing tissue into that space to fill it.

While it doesn’t work with the automatic alignment, you might want to manually try Point annotations to register particular landmarks that are most important to you.

@Mike_Nelson I tested the alignment on more images and I agree with you that on pixel size 5, they are pretty good! I can accept if the alignment does not conform around missing tissue areas, I understand that it is not easy to implement.

It also seems that if the tissue touches or is clipped at the edges, the alignment gets thrown off. I guess the solution here is to rescan the slides and ensure that the whole tissue area is scanned.


New version!

The script does 3 new things:

  1. Allows for annotation-based alignment within a script
  2. Handles aligning images with very different pixel sizes.
  3. Automatically grabs the detection objects in the second image and transfers them to the “base” image.

Basic workflow: have 2 images in a project of the same slide acquired at different resolutions. Create a tissue annotation for each. Create detection objects in image1. Change otherFile in the script to be the name of image1. Go to image2. Run this script. See where the image1 detections appear in image 2.

For annotation-based alignment, I copied the QuPath source code with a little bit of modification for it to run without the interactive window.

I also modified the QuPath code for dealing with the different pixel sizes. It now uses different downsample amounts for the 2 images (to be the same pixel size), returns the ratio of the two as a value, and scales the final transform by that ratio. Note: the alignment will fail if one of your images does not have a pixel size in microns.
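
As a made-up numeric illustration of that ratio: if the base image has a 0.5 µm pixel size and the template 2.0 µm, then at a requested pixel size of 10 µm the downsamples work out as follows.

double requestedPixelSizeMicrons = 10.0                      // illustrative values only
double downsampleBase    = requestedPixelSizeMicrons / 0.5   // base image at 0.5 µm/px -> 20
double downsampleOverlay = requestedPixelSizeMicrons / 2.0   // template at 2.0 µm/px  -> 5
double scaleFactor = downsampleBase / downsampleOverlay      // -> 4
// The script then compensates with affine.prependScale(1/scaleFactor, 1/scaleFactor)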

Thanks to @Mike_Nelson for his help, especially with the function to transfer objects between images!

/***
 * Script to align 2 images in the project with different pixel sizes, using either intensities or annotations.
 * Run from base image and include the name of the template image in otherFile.
 * Writes the affine transform to an object inside the Affine subfolder of your project folder.
 * Also grabs the detection objects from the template image. Can change this to annotation objects.
 *
 * Needed inputs: name of template image, what values to use for alignment (intensity or annotation), what type of registration (RIGID or AFFINE), what pixel sizes to use for iterative registration refinement. 
 *
 * Written by Sara McArdle of the La Jolla Institute, 2020, with lots of help from Mike Nelson.
 *
 */
import static qupath.lib.gui.scripting.QPEx.*

import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.objects.classes.PathClass
import qupath.lib.objects.classes.PathClassFactory
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI
import qupath.lib.gui.dialogs.Dialogs
import qupath.lib.images.ImageData
import qupath.lib.images.servers.ImageServer
import qupath.lib.images.servers.LabeledImageServer
import qupath.lib.images.servers.PixelCalibration
import qupath.lib.regions.RegionRequest
import qupath.opencv.tools.OpenCVTools

import java.awt.Graphics2D
import java.awt.Transparency
import java.awt.color.ColorSpace
import java.awt.geom.AffineTransform
import java.awt.image.ComponentColorModel
import java.awt.image.DataBuffer
import java.awt.image.BufferedImage
import javafx.scene.transform.Affine

import org.bytedeco.javacpp.indexer.FloatIndexer
import org.bytedeco.opencv.opencv_core.Mat
import org.bytedeco.opencv.opencv_core.TermCriteria
import org.bytedeco.opencv.global.opencv_core
import org.bytedeco.opencv.global.opencv_video
import org.bytedeco.javacpp.indexer.Indexer

//set these things
String alignmentType="AREA"  //"AREA" will use annotations. Any other string will use intensities.
otherFile='Zinc_raw.tiff' //Name of template image

//collect basic information
baseImageName = getProjectEntry().getImageName()
def imageDataBase=getCurrentImageData()
def imageDataSelected=project.getImageList().find{it.getImageName()==otherFile}.readImageData()
double otherPixelSize=imageDataSelected.getServer().getPixelCalibration().getAveragedPixelSizeMicrons()


//perform affine transforms. Start with a large pixel size and iteratively refine the transform. NOTE: the final pixel size in the last autoAlignPrep SHOULD NOT be smaller than the pixel size of your least resolved image
Affine affine= []
autoAlignPrep(200.0,alignmentType,imageDataBase,imageDataSelected,affine,"RIGID")
autoAlignPrep(100.0,alignmentType,imageDataBase,imageDataSelected,affine,"RIGID")
def scaleFactor=autoAlignPrep(otherPixelSize,alignmentType,imageDataBase,imageDataSelected,affine,"AFFINE") //may want to change to RIGID, depending on your application.

//deal with the differing downsample amounts
affine.prependScale(1/scaleFactor,1/scaleFactor)

//save the transform as an object in the project folder
def matrix = []
matrix << affine.getMxx()
matrix << affine.getMxy()
matrix << affine.getTx()
matrix << affine.getMyx()
matrix << affine.getMyy()
matrix << affine.getTy()

affinepath = buildFilePath(PROJECT_BASE_DIR, 'Affine '+baseImageName)
mkdirs(affinepath)
path = buildFilePath(PROJECT_BASE_DIR, 'Affine '+baseImageName,  otherFile+".aff")

new File(path).withObjectOutputStream {
    it.writeObject(matrix)
}


//gather the detection objects from otherFile and transfer them to the base image (see GatherObjects below)
new File(affinepath).eachFile{ f->
    GatherObjects(false, true, f)
}


/*Subfunctions taken from here:
https://github.com/qupath/qupath/blob/a1465014c458d510336993802efb08f440b50cc1/qupath-experimental/src/main/java/qupath/lib/gui/align/ImageAlignmentPane.java
 */

//creates an image server using the actual images (for intensity-based alignment) or a labeled image server (for annotation-based).
double autoAlignPrep(double requestedPixelSizeMicrons, String alignmentMethod, ImageData<BufferedImage> imageDataBase, ImageData<BufferedImage> imageDataSelected, Affine affine,String registrationType) throws IOException {
    ImageServer<BufferedImage> serverBase, serverSelected;

    if (alignmentMethod == 'AREA') {
        logger.debug("Image alignment using area annotations");
        Map<PathClass, Integer> labels = new LinkedHashMap<>();
        int label = 1;
        labels.put(PathClassFactory.getPathClassUnclassified(), label++);
        for (def annotation : imageDataBase.getHierarchy().getAnnotationObjects()) {
            def pathClass = annotation.getPathClass();
            if (pathClass != null && !labels.containsKey(pathClass))
                labels.put(pathClass, label++);
        }
        for (def annotation : imageDataSelected.getHierarchy().getAnnotationObjects()) {
            def pathClass = annotation.getPathClass();
            if (pathClass != null && !labels.containsKey(pathClass))
                labels.put(pathClass, label++);
        }

        double downsampleBase = requestedPixelSizeMicrons / imageDataBase.getServer().getPixelCalibration().getAveragedPixelSize().doubleValue();
        serverBase = new LabeledImageServer.Builder(imageDataBase)
                .backgroundLabel(0)
                .addLabels(labels)
                .downsample(downsampleBase)
                .build();

        double downsampleSelected = requestedPixelSizeMicrons / imageDataSelected.getServer().getPixelCalibration().getAveragedPixelSize().doubleValue();
        serverSelected = new LabeledImageServer.Builder(imageDataSelected)
                .backgroundLabel(0)
                .addLabels(labels)
                .downsample(downsampleSelected)
                .build();


    } else {
        // Default - just use intensities
        logger.debug("Image alignment using intensities");
        serverBase = imageDataBase.getServer();
        serverSelected = imageDataSelected.getServer();
    }

    scaleFactor=autoAlign(serverBase, serverSelected, registrationType, affine, requestedPixelSizeMicrons);
    return scaleFactor
}

double autoAlign(ImageServer<BufferedImage> serverBase, ImageServer<BufferedImage> serverOverlay, String registrationType, Affine affine, double requestedPixelSizeMicrons) {
    PixelCalibration calBase = serverBase.getPixelCalibration()
    double pixelSizeBase = calBase.getAveragedPixelSizeMicrons()
    double downsampleBase = 1
    if (!Double.isFinite(pixelSizeBase)) {
      //  while (serverBase.getWidth() / downsampleBase > 2000)
       //     downsampleBase++;
       // logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsampleBase)
        pixelSizeBase=50
        downsampleBase = requestedPixelSizeMicrons / pixelSizeBase
    } else {
        downsampleBase = requestedPixelSizeMicrons / pixelSizeBase
    }

    PixelCalibration calOverlay = serverOverlay.getPixelCalibration()
    double pixelSizeOverlay = calOverlay.getAveragedPixelSizeMicrons()
    double downsampleOverlay = 1
    if (!Double.isFinite(pixelSizeOverlay)) {
    //    while (serverBase.getWidth() / downsampleOverlay > 2000)
    //        downsampleOverlay++;
     //   logger.warn("Pixel size is unavailable! Default downsample value of {} will be used", downsampleOverlay)
        pixelSizeOverlay=50
        downsampleOverlay = requestedPixelSizeMicrons / pixelSizeOverlay
    } else {
        downsampleOverlay = requestedPixelSizeMicrons / pixelSizeOverlay
    }

    double scaleFactor=downsampleBase/downsampleOverlay

    BufferedImage imgBase = serverBase.readBufferedImage(RegionRequest.createInstance(serverBase.getPath(), downsampleBase, 0, 0, serverBase.getWidth(), serverBase.getHeight()))
    BufferedImage imgOverlay = serverOverlay.readBufferedImage(RegionRequest.createInstance(serverOverlay.getPath(), downsampleOverlay, 0, 0, serverOverlay.getWidth(), serverOverlay.getHeight()))

    imgBase = ensureGrayScale(imgBase)
    imgOverlay = ensureGrayScale(imgOverlay)

    Mat matBase = OpenCVTools.imageToMat(imgBase)
    Mat matOverlay = OpenCVTools.imageToMat(imgOverlay)

    Mat matTransform = Mat.eye(2, 3, opencv_core.CV_32F).asMat()
// Initialize using existing transform
//		affine.setToTransform(mxx, mxy, tx, myx, myy, ty)
    try {
        FloatIndexer indexer = matTransform.createIndexer()
        indexer.put(0, 0, (float)affine.getMxx())
        indexer.put(0, 1, (float)affine.getMxy())
        indexer.put(0, 2, (float)(affine.getTx() / downsampleBase))
        indexer.put(1, 0, (float)affine.getMyx())
        indexer.put(1, 1, (float)affine.getMyy())
        indexer.put(1, 2, (float)(affine.getTy() / downsampleBase))
//			System.err.println(indexer)
    } catch (Exception e) {
        logger.error("Error closing indexer", e)
    }

    TermCriteria termCrit = new TermCriteria(TermCriteria.COUNT, 100, 0.0001)

    try {
        int motion
        switch (registrationType) {
            case "AFFINE":
                motion = opencv_video.MOTION_AFFINE
                break
            case "RIGID":
                motion = opencv_video.MOTION_EUCLIDEAN
                break
            default:
                logger.warn("Unknown registraton type {} - will use {}", regionstrationType, RegistrationType.AFFINE)
                motion = opencv_video.MOTION_AFFINE
                break
        }
        double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null)
        logger.info("Transformation result: {}", result)
    } catch (Exception e) {
        Dialogs.showErrorNotification("Estimate transform", "Unable to estimated transform - result did not converge")
        logger.error("Unable to estimate transform", e)
        return
    }

// To use the following function, images need to be the same size
//		def matTransform = opencv_video.estimateRigidTransform(matBase, matOverlay, false);
    Indexer indexer = matTransform.createIndexer()
    affine.setToTransform(
            indexer.getDouble(0, 0),
            indexer.getDouble(0, 1),
            indexer.getDouble(0, 2) * downsampleBase,
            indexer.getDouble(1, 0),
            indexer.getDouble(1, 1),
            indexer.getDouble(1, 2) * downsampleBase
    )
    indexer.release()

    matBase.release()
    matOverlay.release()
    matTransform.release()

    return scaleFactor
}

//to gather annotation objects instead of detections, change the "def pathObjects" line in GatherObjects below to otherHierarchy.getAnnotationObjects()
def GatherObjects(boolean deleteExisting, boolean createInverse, File f){
    f.withObjectInputStream {
        matrix = it.readObject()

        // Get the project & the requested image name
        def project = getProject()
        def entry = project.getImageList().find {it.getImageName()+".aff" == f.getName()}
        if (entry == null) {
            print 'Could not find image with name ' + f.getName()
            return
        }

        def otherHierarchy = entry.readHierarchy()
        def pathObjects = otherHierarchy.getDetectionObjects() //OR getAnnotationObjects()

        // Define the transformation matrix
        def transform = new AffineTransform(
                matrix[0], matrix[3], matrix[1],
                matrix[4], matrix[2], matrix[5]
        )
        if (createInverse)
            transform = transform.createInverse()

        if (deleteExisting)
            clearAllObjects()

        def newObjects = []
        for (pathObject in pathObjects) {
            newObjects << transformObject(pathObject, transform)
        }
        addObjects(newObjects)
    }
}

//other subfunctions

PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        newObject.setName(pathObject.getName())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        newObject.setName(pathObject.getName())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    shape2 = transform.createTransformedShape(shape)
    return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}

static BufferedImage ensureGrayScale(BufferedImage img) {
    if (img.getType() == BufferedImage.TYPE_BYTE_GRAY)
        return img
    if (img.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
        ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY)
        def colorModel = new ComponentColorModel(cs, [8] as int[], false, true,
                Transparency.OPAQUE,
                DataBuffer.TYPE_BYTE)
        return new BufferedImage(colorModel, img.getRaster(), false, null)
    }
    BufferedImage imgGray = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_GRAY)
    Graphics2D g2d = imgGray.createGraphics()
    g2d.drawImage(img, 0, 0, null)
    g2d.dispose()
    return imgGray
}

Hi Sara, sorry to revive an old post.

I just got round to trying the alignment script in a project and I was wondering what the result value returned by the findTransformECC() call in the alignment code actually reports.

I found some details in the OpenCV documentation for findTransformECC() but I don’t quite understand how a single number can represent the “geometric transform (warp) between two images”.

Can the number reported in result be interpreted as how good a fit the transformation/alignment is? E.g. if result = 0.951762537, would that mean the alignment is ~95.17% accurate?

From my observations, larger numbers from result do seem to have a better alignment compared to smaller ones.

EDIT: I checked and saw that this code is present in the QuPath source code, so I assume it’s been implemented by @petebankhead ; by any chance would you know what result reports?

Many thanks,
Yau

Hi @ym.lim I’m not sure what it means myself… the OpenCV docs don’t seem to explain, although it is used in an example here: OpenCV: samples/cpp/image_alignment.cpp

It seems -1 is interpreted as a special value.

My guess is that it may be the ‘Enhanced Correlation Coefficient’ referenced here:
https://docs.opencv.org/4.5.0/dc/d6b/group__video__track.html#gae94247c0014ff6497a3f85e60eab0a66
But I don’t know for sure – you might need to dig into the OpenCV source code for a definitive answer.

The transform is stored in matTransform, which is what QuPath really needs.
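
Assuming it does behave as a correlation-like score (higher meaning a better fit, which matches the observation above), one possible use is a rough sanity check right after the findTransformECC call in the scripts above; the 0.7 threshold here is arbitrary:

double result = opencv_video.findTransformECC(matBase, matOverlay, matTransform, motion, termCrit, null)
if (result < 0.7)
    logger.warn("Low ECC value ({}) - the alignment may be unreliable", result)
logger.info("Transformation result: {}", result)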