Interactive image alignment

qupath

#1

Hi Guys

Great work on the new release! I wanted to ask a question regarding the image alignment feature, which seems to work really well and gets really close overlays of two serial sections. I do realise this is an experimental feature, however is there a way to:
i) Stack the two aligned multichannel images (similar to the proposal here), with the ability to get rid of duplicate channels, i.e. DAPI?
ii) Analyse the hyperplexed images thereafter?
iii) As a short-term solution, duplicate the annotations after alignment?
Thank you

mustafa


#2

I think the answer is somewhere between ‘not yet’ and ‘only with quite a bit of scripting’.

The command currently exists mostly to test the ability to overlay one whole slide image on top of another, as a first step towards something more useful. It is already somewhat useful, though, in that you can annotate on top of the image using the overlay as a guide (optionally with modified opacity). The Wand tool will also use this information.

It ultimately is heading in the direction you suggest, but isn’t something I am actively working on right now due to a rather severe lack of time… and also because I strongly suspect that solving that problem will lead to further requests from others to improve on ‘really close’ - thus demanding more unavailable time :slight_smile:

Nevertheless, it does already show the affine transform matrix. It should be possible to use this in a script to update the location of all ROIs in a .qpdata file accordingly, so you could be able to generate objects on one image and import them (transformed) on the other.
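To give a rough idea (an untested sketch with made-up matrix values - the real ones would come from the alignment dialog), the core step would be something like this, here applied to the annotations of the current image just to show the principle:

import java.awt.geom.AffineTransform
import qupath.lib.roi.PathROIToolsAwt

import static qupath.lib.gui.scripting.QPEx.*

// Made-up example values; AffineTransform's constructor takes (m00, m10, m01, m11, m02, m12)
def transform = new AffineTransform(
        0.998, 0.070,
        -0.070, 0.998,
        1200.0, -350.0
)

// Apply the transform to the shape of each annotation ROI
for (annotation in getAnnotationObjects()) {
    def shape = PathROIToolsAwt.getShape(annotation.getROI())
    def transformedShape = transform.createTransformedShape(shape)
    // transformedShape could then be converted back into a ROI & object for the other image
}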

Is that what you mean by iii / would it be useful?


#3

Hi Pete

Thanks for the reply, I totally appreciate your point on time.

On point iii): if the alignment could somehow be saved on the overlay image, and the annotations then duplicated on both images so that we can compare the populations of cells in identical ROIs across the two sections, that would be really helpful.


#4

Mustafa (and Pete),
Did you find this image alignment feature simply by playing with the new release, or is it documented somewhere? (I searched, but without success.)
Thanks
Martin


#5

The alignment feature is only in 0.2.0 and can be found in the Analyze menu.


#6

Is there a documentation or video on how to use the image alignment feature?


#7

I don’t believe so at this time. It wasn’t one of the featured upgrades. I am sure there will be something once the functionality is more polished/complete. Busy week for Pete, though. :slight_smile:


#8

No documentation I’m afraid, nor really any intention that it should be used yet… it’s a work in progress (albeit one not actively progressing). Documentation lags behind…


#9

Here’s a script you can try for transferring objects from one image to another, applying an affine transform along the way.

Note there are parameters that need to be set at the top - specifically the affine transform (which you can get from the interactive alignment command) and the name of the other image (with the objects on it), which should be in the same project.

/**
 * Script to transfer QuPath objects from one image to another, applying an AffineTransform to any ROIs.
 */

// SET ME! Define transformation matrix
// Get this from 'Interactive image alignment (experimental)'
def matrix = [
        -0.998, -0.070, 127256.994,
        0.070, -0.998, 72627.371
]

// SET ME! Define image containing the original objects (must be in the current project)
def otherImageName = null

// SET ME! Delete existing objects
def deleteExisting = true

// SET ME! Change this if things end up in the wrong place
def createInverse = true


import qupath.lib.gui.helpers.DisplayHelpers
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

if (otherImageName == null) {
    DisplayHelpers.showErrorNotification("Transform objects", "Please specify an image name in the script!")
    return
}

// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == otherImageName}
if (entry == null) {
    print 'Could not find image with name ' + otherImageName
    return
}

def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getRootObject().getChildObjects()

// Define the transformation matrix
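// Note: the flat list above is written row by row as (m00, m01, m02, m10, m11, m12),
// whereas AffineTransform's 6-argument constructor expects (m00, m10, m01, m11, m02, m12) -
// hence the reshuffled indices below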
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5]
)
if (createInverse)
    transform = transform.createInverse()

if (deleteExisting)
    clearAllObjects()

def newObjects = []
for (pathObject in pathObjects) {
    newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)

print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = PathROIToolsAwt.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    def shape2 = transform.createTransformedShape(shape)
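    // The last argument (0.5) below is the flatness used when converting the transformed shape back into a ROI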
    return PathROIToolsAwt.getShapeROI(shape2, roi.getC(), roi.getZ(), roi.getT(), 0.5)
}

#10

Hi, I'm new to ImageJ and I have not been able to create a .obj 3D file without losing some data from the .raw file stack.
Basically, the .raw file contains a number of images arranged in a stack.
These images are black-and-white only, and each and every pixel in them is important.
After importing the .raw file into ImageJ it can generally be saved as a .obj, but even with a low threshold and a resampling factor of 1, some of the pixel data is missing from the 3D model.
Can anyone help me with this issue, please? Even a small piece of information would be really useful. All I need is a 3D model output from the image stack without any loss of data.
Thanks in advance.


#11

Hi Pete

The above solution is exactly what I was looking for. Today I managed to run the script and it does transfer the annotations, however it does not transform them according to the alignment, although I do get a 'Done' output:

INFO: Reading hierarchy from TGU118_AM.qpdata...
INFO: Done!

Any suggestions would be very much appreciated.


#12

Did you set the matrix values at the top of the script, and try switching to createInverse = false?

Do the objects appear under the ‘Hierarchy’ tab, but just not on the image?

Is there any chance that annotations are currently hidden…?

I wrote the script rather quickly, but it works for me (n=1). To figure out where it is going wrong I’d need more details about what you’ve tried, or else I can adapt the script later to print out these details.
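
In the meantime, one quick way to narrow it down (an untested sketch) is to print where the transferred objects actually ended up on the destination image, e.g.

import static qupath.lib.gui.scripting.QPEx.*

// Print how many objects are present & where their ROIs are located
print 'Annotations: ' + getAnnotationObjects().size()
print 'Detections: ' + getDetectionObjects().size()
for (annotation in getAnnotationObjects()) {
    def roi = annotation.getROI()
    print annotation.toString() + ' at (' + roi.getBoundsX() + ', ' + roi.getBoundsY() + ')'
}

If the printed coordinates fall far outside the image bounds, the transform is probably being applied in the wrong direction - in which case switching createInverse is the thing to try.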


#13

Hi Pete

Yes, I did set the matrix values.
I tried createInverse = false.
The annotations are in the Hierarchy tab.
The annotations are transferring, although the alignment transformation doesn't seem to happen.

Here is the script as-is:

/**
 * Script to transfer QuPath objects from one image to another, applying an AffineTransform to any ROIs.
 */

// Matrix from the alignment
def matrix = [
        0.995, -0.105, 4549.200,
        0.105, 0.995, -18068.189
]

// SET ME! Define image containing the original objects (must be in the current project)
def otherImageName = 'TGU118_AM'

// SET ME! Delete existing objects
def deleteExisting = true

// SET ME! Change this if things end up in the wrong place
def createInverse = true

import qupath.lib.gui.helpers.DisplayHelpers
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.interfaces.ROI

import java.awt.geom.AffineTransform

import static qupath.lib.gui.scripting.QPEx.*

if (otherImageName == null) {
    DisplayHelpers.showErrorNotification("Transform objects", "Please specify an image name in the script!")
    return
}

// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == otherImageName}
if (entry == null) {
    print 'Could not find image with name ' + otherImageName
    return
}

def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getRootObject().getChildObjects()

// Define the transformation matrix
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5]
)
if (createInverse)
    transform = transform.createInverse()

if (deleteExisting)
    clearAllObjects()

def newObjects = []
for (pathObject in pathObjects) {
    newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)

print 'Done!'

/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}

/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = PathROIToolsAwt.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    def shape2 = transform.createTransformedShape(shape)
    return PathROIToolsAwt.getShapeROI(shape2, roi.getC(), roi.getZ(), roi.getT(), 0.5)
}

#14

Hi Pete,

I emailed you ages ago about this kind of overlaying tool. I have been using the experimental version and it is working great, speeding analysis up massively. I basically select an area, line up the overlay of the multiplex, detect CD3 positive cells on the first layer and toggle through the other markers, then manually classify all of the CD3 positive cells according to the other markers (CD4, CD8 and FOXP3).

Obviously a fully automated version would be even more fantastic… However, in the meantime I thought that a (possibly straightforward) functional improvement would be to have a “show layer mini viewer”, i.e. a mini viewer that contains each of the overlays side by side, just like the “show channel mini viewer” shows each channel side by side.

Thanks for all the great work. One of my PhD students from Southampton is now a postdoc up in Edinburgh and has been keeping me up to date with your advancements!

BW,

Alistair


#15

I wanted to check what you mean by manually classify, just to be sure! Are you duplicating all of the detections onto the new image, then running something like Add cell intensity measurements (this script allows the added measurements to be split into cytoplasmic/nuclear means) or Subcellular detections to get quantitative measures of the other stains? I want to make sure you have options in case you had someone cycling through individual cells!

If you add measurements or create and store subcellular detection data within each cell, you should be able to eventually get a data file that contains cells with information from all stains, which you can then use Measurement Maps etc to look at in detail.
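
For example, once each cell carries a measurement for another stain, a simple thresholding script could assign classes - this is just an illustrative sketch, and the measurement name and cut-off below are placeholders that would need to match whatever your own measurements are called:

import static qupath.lib.gui.scripting.QPEx.*

// Placeholder measurement name & threshold - adjust to match your own data
def measurementName = 'Cell: CD8 mean'
def threshold = 0.2

for (cell in getCellObjects()) {
    if (measurement(cell, measurementName) > threshold)
        cell.setPathClass(getPathClass('CD8+'))
    else
        cell.setPathClass(getPathClass('CD8-'))
}
fireHierarchyUpdate()
print 'Classified ' + getCellObjects().size() + ' cells'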


#16

Thanks @alistair_easton - that email remains starred in my inbox to remind me to return to it whenever this is more fully resolved… but glad the initial overlay view is already helping!


#17

Hi,
I’m detecting positive cells on one layer (CD3) then overlaying the images for CD4, CD8 and FOXP3 (3 separate images) then toggling through each layer for each detected cell and deciding whether it is CD3 positive, CD4 positive CD8 positive or FOX P3 positive. I’m doing this for each individual cell in selected high power fields. I can’t see any way to detect the cells on one layer then get it to automatically follow these CD3 positive cells through each layer and detect whether they are also positive for each marker.