QuPath - Multiple Image Alignment and Object Transfer

Hi all! Here is a set of scripts focused on image alignment (as of 0.2.0M9).

First off, I would like to thank Dirk Homann and Verena van der Heide at Mt. Sinai for having a fun project to work on! Also @petebankhead @smcardle and @Zbigniew_Mikulski for their continuous support and encouragement.

Ask away if you have any questions; I think I am starting to get the hang of the alignment and transfer.

These are all built off of the giant thread that started the transfer of objects between aligned images here (M5 or 6 I think?), so almost all of the credit goes to Pete!


And moving on to M9, one of the variants I will discuss is based on Pete’s script here:

That said, I’ve been interested in taking this to the project level for either sequential slices or multiple stains. Transferring objects between images one at a time is good, but multiple images would be better! So I’ll start with the setup needed, how it works and why, then go on to distributing objects from one image out to many images, and finally collecting objects from multiple images into one image (to combine data).

ALERT

The entire first post has almost immediately been superseded by Sara’s post here:


I am adjusting the text of the scripts to reverse the direction of transfer: her script runs everything from the source/base/fixed image, which is the opposite of the method used below, and is much easier!
Scripts in the second and third posts have not really changed, aside from the createInverse variable.

The foundation

The basic unit of all of this is the transform matrix itself. If you have used the original script in the first link above, you are probably used to copying six numbers into the field to perform the transformation. Here, we will save that matrix to a file, then access each of those files in turn to bridge multiple images within the project.
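For orientation, those six numbers are just the 2×3 matrix of a standard Java AffineTransform. A minimal sketch (plain Java, hypothetical values) of how they move a single pixel coordinate:

import java.awt.geom.AffineTransform
import java.awt.geom.Point2D

// Hypothetical six numbers as displayed, row by row: [m00, m01, m02, m10, m11, m12]
// They map a point (x, y) in one image to (x', y') in the other:
//   x' = m00*x + m01*y + m02
//   y' = m10*x + m11*y + m12
def m = [1.02d, -0.01d, 150d, 0.01d, 1.02d, -40d]
// Note the AWT constructor takes column order (m00, m10, m01, m11, m02, m12),
// which is why the scripts below index the list as 0, 3, 1, 4, 2, 5
def transform = new AffineTransform(m[0], m[3], m[1], m[4], m[2], m[5])
print transform.transform(new Point2D.Double(1000, 2000), null) // where (1000, 2000) lands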

The command itself is found here: Analyze/Interactive image alignment (experimental)


And looks like this when you first open it in an image

A few notes about interacting with this dialog.

  1. Choose images from project - This button does what it says, and allows you to add multiple images from the project at one time. I will not be using it that way, as I prefer to do a little more setup to allow for significant downstream simplification and time savings.
  2. The Opacity bar - This does nothing when you only have one image open, but once you have a second image in your list AND selected, it lets you control which image you see, or both! All the way to the right shows only the selected image (the moving image, not your original), and all the way to the left shows only your original image (the fixed image).
    From here on, I will refer to the original image as the ‘fixed’ image and the second image as the ‘moving’ image.
  3. The rotate left and right buttons allow you to rotate the moving image manually.
  4. The fun part. Registration type can usually be left at Affine, but as I learned from @smcardle, Rigid will sometimes lead to better results if you know that there should be no shear or scale changes (if your tissue is not expected to shrink or warp).
    https://en.wikipedia.org/wiki/Affine_transformation
    Rigid limits you to the other transformations listed in the link - rotation and translation only (a short sketch follows this list).
    I have found that for Alignment type, Image intensity is usually fine. If you have already created accurate Simple Tissue Detection annotation objects in each image, that might be faster. The only time I have considered using Points is when attempting to align fluorescent images, by using the DAPI channel to create points from detected cells. If your fluorophores are consistent enough in distribution and intensity, you may not need that.
    Pixel size - Higher is faster but less accurate; go lower if you want more accuracy and can afford the time.
  5. Click Estimate transform to populate the box at “5”
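To make the Rigid restriction concrete, here is a small sketch (plain Java, hypothetical angle and shift): a rigid transform is rotation plus translation only, so its matrix has no independent scale or shear terms.

import java.awt.geom.AffineTransform

// Rigid = rotation + translation (hypothetical values)
def rigid = AffineTransform.getRotateInstance(Math.toRadians(3.5))
rigid.preConcatenate(AffineTransform.getTranslateInstance(120, -35))
// The four non-translation entries are just cos/sin of a single angle,
// unlike a full affine where scale and shear can vary independently
print rigid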


At this point the list of numbers could be copied into the matrix variable in Pete’s script and you could transfer objects between images! In this image, you can see there was a slight shift between the two images, indicated by the ~50% opaque border on the right.

The Invert option comes into play depending on whether you started with the image with your objects in it. If you want to transfer an object from your fixed image to your moving image, you use the transform as normal with createInverse as true (and you run Pete’s script from the destination/moving image - top half of the image below). But what if you wanted to transfer an object in the other direction, from the moving image back into the fixed image? Or let’s say you ran the Interactive image alignment with the destination image open? In that case you would set createInverse (scripts in the following two posts) to false (bottom half of the image below).

If you perform the alignment from the image with the objects in it, createInverse should be “false” within the script. However, with the way I am going to set this up for a whole project, we will want to invert the transformation, since (for naming purposes) all transforms are created from the destination images. You can see this in the image above: the object is in the moving image (the target of each of our affine transforms, which is the same in each case), and the object transfer happens in the opposite direction.
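In script form, flipping the direction is a one-liner on the same matrix. A quick sketch (made-up translation) showing that the inverse undoes the forward transform:

import java.awt.geom.AffineTransform
import java.awt.geom.Point2D

// Hypothetical pure shift for illustration
def transform = new AffineTransform(1d, 0d, 0d, 1d, 150d, -40d)
def inverse = transform.createInverse() // this is what createInverse = true does in the scripts

def p = transform.transform(new Point2D.Double(500, 500), null) // one direction
print inverse.transform(p, null) // and back again: Point2D.Double[500.0, 500.0]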


Image above shows the overall results. Blue arrows indicate the creation of affine matrices between each fixed image and the moving image that contains the objects, and then the red arrows show the inverse matrix being used to place the original objects into each of the desired images.

To start with, we shall take the transform as displayed and store it to a file using the following script. As it warns in the script, you CAN have multiple Interactive image alignment dialogs open at once (if you want to create many shadow images), and I am not sure what the results would be in that case. Please only have one open at a time for this script :slight_smile:

//Script writes out a file with the name of the current image, and the Affine Transformation in effect in the current viewer.
//Can get confused if there is more than one overlay active at once.
//Current image should be the destination image
// Michael Nelson 03/2020

def name = getProjectEntry().getImageName()
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', name)



import qupath.lib.gui.align.ImageServerOverlay

def overlay = getCurrentViewer().getCustomOverlayLayers().find {it instanceof ImageServerOverlay}

affine = overlay.getAffine()

print affine
afString = affine.toString()
// The JavaFX Affine prints as a 3x4 (3D) matrix; strip the wrapper text
// and parse all twelve numbers
afString = afString.minus('Affine [').minus(']').trim().split('\n')
cleanAffine = []
afString.each{
    temp = it.split(',')
    temp.each{ cleanAffine << Double.parseDouble(it) }
}

// Keep only the six 2D components: indices 0,1,3 (first row) and 4,5,7 (second row)
def matrix = []
affineList = [0, 1, 3, 4, 5, 7]
for (i = 0; i < 12; i++){
    if (affineList.contains(i))
        matrix << cleanAffine[i]
}

new File(path).withObjectOutputStream {
    it.writeObject(matrix)
}
print 'Done!'

The result will be a folder within your project called Affine, containing a file with the transformation matrix. The file will have the exact name of the image it was run from. This naming scheme is why you run each Interactive image alignment from the destination images; in fact, the only image you will not run the alignment script from is your base image (say, pan-CK for a tumor).
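If you ever want to sanity-check one of those files, reading it back is symmetrical with the script above (a sketch, assuming the file was written by that script):

// Read back the saved transform for the current image and print it
def name = getProjectEntry().getImageName()
def afPath = buildFilePath(PROJECT_BASE_DIR, 'Affine', name)
new File(afPath).withObjectInputStream {
    print it.readObject() // the six numbers: [m00, m01, m02, m10, m11, m12]
}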

Another version of the script allows for more manual control, if desired, but can’t be Run for Project…

//Paste matrix between brackets for 'matrix' variable
//Current image should be the destination image
// Michael Nelson 03/2020
def name = getProjectEntry().getImageName()
path = buildFilePath(PROJECT_BASE_DIR, 'Affine')
mkdirs(path)
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', name)

def matrix = []

new File(path).withObjectOutputStream {
    it.writeObject(matrix)
}
print 'Done!'

Use the second one if you want to run all transforms from a single image, or change the naming scheme.
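For example, to run everything from one image and still get one file per destination, the naming line in the script above could be swapped for something like this (hypothetical scheme, one run per destination):

// Hypothetical: name the file after the destination image instead of the current one
def destinationName = 'Slide2.ndpi' // whichever image this matrix aligns to
path = buildFilePath(PROJECT_BASE_DIR, 'Affine', destinationName)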


Distributing annotations from one image to many images

Now I have a project, and there are a bunch of images I want to send my initial annotations to! I would go to each of those images and run the above script to automatically create a file in the Project/Affine folder that contains the transform.
Make sure to close the Interactive image alignment window when you switch images, as the fixed image DOES NOT CHANGE to your current image when you change images within QuPath.

It would look something like this after getting a couple of transforms.


The files are given the exact same name as the image the script was run from, so while they look like NDPI files, they are not actually. This is fine, and everything should work.

Next go to the source image with the objects to distribute, and run the following script…
After making a backup copy of your project. Set the “deleteExisting” toggle to true if you want to remove any annotations or detections that might have been sitting around in the destination images.

/**
If you have annotations within annotations, you may get duplicates. Ask on the forum or change the def pathObjects line.

To use, have all objects desired in one image, and alignment files in the Affine folder within your project folder.
If you have not saved those, this script will not work.
It will use ALL of the affine transforms in that folder to transform the objects in the current image to the destination images
that are named after the affine files. 

Requires creating each affine transformation from the destination images so that there are multiple transform files with different names.
Michael Nelson 03/2020
Script based on: https://forum.image.sc/t/interactive-image-alignment/23745/9
and adjusted thanks to Pete's script: https://forum.image.sc/t/writing-objects-to-another-qpdata-file-in-the-project/35495/2
 */
 
// SET ME! Delete existing objects
def deleteExisting = true

// SET ME! Change this if things end up in the wrong place
def createInverse = false

import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

path = buildFilePath(PROJECT_BASE_DIR, 'Affine')

new File(path).eachFile{ f->
    f.withObjectInputStream {
        matrix = it.readObject()




// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == f.getName()}
if (entry == null) {
    print 'Could not find image with name ' + f.getName()
    return
}
def imageData = entry.readImageData()
def otherHierarchy = imageData.getHierarchy()
def pathObjects = getAnnotationObjects()

// Define the transformation matrix
// (the AWT constructor order is m00, m10, m01, m11, m02, m12, hence the index shuffle)
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5]
)
if (createInverse)
    transform = transform.createInverse()
    
if (deleteExisting)
    otherHierarchy.clearAll()
    
def newObjects = []
for (pathObject in pathObjects) {
    newObjects << transformObject(pathObject, transform)
}
otherHierarchy.addPathObjects(newObjects)
entry.saveImageData(imageData)
}
}
print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    shape2 = transform.createTransformedShape(shape)
    return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}

This script should not require any inputs or adjustment! You have already done all of the setup. It takes all of the objects in the currently open image, searches for all of the files in the Affine folder (so do not put any random files or folders in there), and applies those affine transformations (inverting them if createInverse is set) to the objects before writing them into the destination files… which have the same names as the affine files!
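One common tweak, hinted at in the script’s header comment: if you have annotations within annotations and end up with duplicates, you can restrict the def pathObjects line to top-level annotations only, since children are transferred recursively anyway. A sketch (assuming the nested annotations sit directly under the root object):

// Only send top-level annotations; transformObject() handles their children
def pathObjects = getAnnotationObjects().findAll {
    it.getParent() == getCurrentHierarchy().getRootObject()
}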

Tada! Three different images with aligned objects shown (notice the background in the lower left, mostly).


Collecting objects from many images together into one image

Ok, last option. What if there are a bunch of different objects in each of those images, and I want to transfer them all into one image? Say I created regions of positivity for each marker and wanted to compare those regions within a single image. I’m going to use very obvious example objects this time.

In this case, I want to have the destination image open, and all of the files in the “Affine” folder should be named after the images I want to pull objects from.

/**
 * Script to transfer QuPath objects from one group of images to another single image, applying an AffineTransform to any ROIs.
 * This script should be run from the image you want to move the objects into, and will access all files within the Affine subfolder.
 * You must have generated Affine files first through another script before using this script.
 1. Create objects in source images.
 2. Create alignments to destination image from within each of the source images.
 3. Run this script from the destination image.
 
Script based on Pete's here: https://forum.image.sc/t/interactive-image-alignment/23745/9

 Michael Nelson 3/2020
 * All objects in the source images should be imported into the destination image. 
 */
 
// SET ME! Delete existing objects
def deleteExisting = false

// SET ME! Change this if things end up in the wrong place
def createInverse = true

import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.RoiTools
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

path = buildFilePath(PROJECT_BASE_DIR, 'Affine')

new File(path).eachFile{ f->
    f.withObjectInputStream {
        matrix = it.readObject()




// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == f.getName()}
if (entry == null) {
    print 'Could not find image with name ' + f.getName()
    return
}

def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getAnnotationObjects()

// Define the transformation matrix
// (the AWT constructor order is m00, m10, m01, m11, m02, m12, hence the index shuffle)
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5]
)
if (createInverse)
    transform = transform.createInverse()

if (deleteExisting)
    clearAllObjects()

def newObjects = []
for (pathObject in pathObjects) {
    newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)
}
}
print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = RoiTools.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    shape2 = transform.createTransformedShape(shape)
    return RoiTools.getShapeROI(shape2, roi.getImagePlane(), 0.5)
}

Notice that createInverse is now true, since I am moving objects back in the opposite direction: from the images where I created the transforms into the single open image.
The rightmost image was the destination image.


Actually, one more post. Let’s say I now run the Distribution script from that final image… with deleteExisting set to TRUE of course, to prevent duplicate objects!


*I did have to click on each Multiviewer window and choose “Reload Data” from the File menu. The objects are written directly to the data files, so they will not show up automatically in the Multiviewer. Normally this should not be an issue.

The best part of all of this: if I want to start over, I can just delete all of the objects and transfer them again VERY quickly. Changed a cell detection algorithm, a pixel classifier, or something else? All of the affine transforms are still the same, and still stored in the folder. For more complex projects, the folder structure could be expanded to use metadata or part of the file name to determine which list of affine transformations to pull from.
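A sketch of what that could look like, assuming each image has a metadata key (here ‘Block’, a hypothetical name) set in the project:

// Hypothetical: keep one Affine subfolder per tissue block, chosen via project metadata
def block = getProjectEntry().getMetadataValue('Block')
def path = buildFilePath(PROJECT_BASE_DIR, 'Affine', block)
new File(path).eachFile { f ->
    print 'Would apply transform from: ' + f.getName()
}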


First post edited with a comment and redirect link, and second and third posts edited to be in line with the better method of generating matrix files posted by @smcardle.

Bumping this due to the questions about alignment and object transfer. Ask away if you have any questions. We hope to have some sort of video tutorial at some point in the future.

Note that all scripts were for 0.2.0M9, and might need to be adjusted (not sure) at some point due to the rapid pace of changes in QuPath during the last/next few weeks.

Dear @Mike_Nelson, @petebankhead and team,
I have a question about an alignment issue related to TMAs. I have two TMA images with two different stains which I would like to align with each other. I can do a rough alignment by having them open side by side in the multi-viewer of course, but I would prefer to have them “properly” aligned. However, when I try to run the auto-align feature of the interactive alignment, I get an error message saying “Unable to estimate transform, results did not converge”.
I also tried creating annotations around a few of the circles and a box around the whole set of tissue, but it didn’t help. Do you have a suggestion for how I could align these? Would it help to first de-array the TMAs?

Thank you very much in advance for your time & help
(hope this is the appropriate place to post this, website newbie here)

It would help to have a little more information about the files themselves, and maybe an image of the overlay. There are many things that could cause the results not to converge, including too high a resolution on the first run. When we automated the alignment workflow, @smcardle and @Zbigniew_Mikulski ended up using a series of alignments, each building on the last: the first at 20 pixel resolution, then 10, then 5. Jumping straight to 5 in many cases caused the same error.

You might also have better luck using simple tissue detection and aligning based on that… BUT, many TMAs I have seen are very messy. Missing cores, damage to tissue, etc. That can also cause problems with any kind of alignment, since some of the area you want to align is missing. Missing data is bad :slight_smile:

Of course, I will try my best to provide this.
I have two .mrxs files which were scanned with a panoramic slide scanner and show 9x12 tissue cores. The majority are preserved in both scans. Tissue detection worked very nicely but sadly didn’t solve the alignment problem. I used the 20 pixel resolution and even increased it to 50 and 100 just on the off chance that would do anything, but it didn’t.

I totally agree that the missing data bits will obviously cause a problem (I got this from someone in my group asking for help). It seems that QuPath doesn’t want to shift one of the scans low/high enough to overlap the tissue annotations. Do you think this could potentially be solved by doing some pre-QuPath cropping, e.g. in CaseViewer? I have attached two screenshots: the first of my first slide,

and then of the overlap, where you can see that the red circles are a few rows off from being aligned with the top.

I hope this is illustrating my problem better. Thank you very much for your help :slight_smile:


That could be due to the position on the slide (I think MRXS uses that) being very different between the TMAs. Have you tried manually shifting the TMA using shift+drag (as described in the dialog, top right) and then running the alignment? Note that if this is working, the Current affine transformation at the bottom should update dynamically in the rightmost two positions of the matrix (the translation terms). The automated alignment will then build on this starting point.

Note that to perform the alignment type “Area annotations”, annotations need to be present and, I think, saved in both images. I have not yet looked into scripting alignment by points or annotations, but it might be possible to put points in the four corners of each image automatically, then align those first, followed by an intensity- or tissue-based alignment.
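Untested, but the corner-points idea might start from something like this sketch, run in each image (ROIs.createPointsROI and the default image plane are the assumed pieces):

import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs

// Drop one multi-point annotation with a point in each corner of the image
def server = getCurrentServer()
double w = server.getWidth()
double h = server.getHeight()
def corners = ROIs.createPointsROI([0d, w, 0d, w] as double[],
                                   [0d, 0d, h, h] as double[],
                                   ImagePlane.getDefaultPlane())
addObject(PathObjects.createAnnotationObject(corners))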


I didn’t realise I could shift the images within the alignment field with the shift+drag option (I feel like such an idiot), but it worked on the very first go! I had done the tissue selection and annotations and saved them for both TMAs.
I have now also been able to fine-tune by decreasing the pixel resolution from 20 to 5, and it’s perfect. Thank you so much for your speedy responses. I love QuPath but am still very much a novice.
