Image alignment: use affine matrix from QuPath in other software

Dear QuPath users,

I have several consecutive sections that have been stained with H-DAB for two different antigens.
I am trying to align these sections, so that I can look at co-localization.

For this I have tried two things:

  • QuPath Image Overlay Alignment > this gives me great alignment, but the aligned images cannot be exported, saved, or otherwise used
  • Exporting the image to ImageJ and using something like HyperStackReg (first deconvolve the H and DAB color channels, register the images using the hematoxylin channel, then stack the images to obtain a ‘double-staining’ or immunofluorescence-like image)

While the second approach works, I find that the alignment shown in the preview window in QuPath is far superior to HyperStackReg’s.

I then tried a third approach, where I copy the affine matrix (a 2×3 matrix) and use this matrix in TransformJ (which asks for a 4×4 matrix). I just copy the matrix that I get from QuPath and leave the rest as is.

For example:

0.9360035061836243 -0.24853743612766266 1485.8318112182617 0
0.24227723479270935 0.934749960899353 990.21954647827150 0
0 0 1 0
0 0 0 1

If I apply this transformation, the image looks okay visually, but when I stack the images using “Images to Stack” with “Copy (center)” in ImageJ, there is still a clear mismatch.
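Looking at it again, I wonder whether the problem is where the translation sits. If TransformJ applies the 4×4 matrix to homogeneous column vectors (x, y, z, 1) (I am not certain of its convention), then the translation belongs in the last column, i.e.:

0.9360035061836243 -0.24853743612766266 0 1485.8318112182617
0.24227723479270935 0.934749960899353 0 990.21954647827150
0 0 1 0
0 0 0 1

Placed in the third column, as in my matrix above, the translation would multiply z = 0 for a 2D image and be dropped entirely, which could explain why the rotation looks fine on its own but the stack is shifted.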

Do you have a suggestion on how to use the transformation matrix that QuPath outputs in an external program? Or is the way I stack the images the problem?
It would be a shame to settle for an ‘inferior’ alignment approach.

Best, Justin

First off, if by colocalization you mean per cell and not general tissue type markers, sequential slices are generally pretty bad at that. Many people are interested in immune cells that will show up strongly in one slice and then not at all in the next. Just something to keep in mind. If one of your markers is a general tumor marker like CK and the other is something specific, and you want to look at the distance to tumor, that works fairly well.

Have you looked into analyzing the pairs of images by moving objects back and forth between them within QuPath, as posted here?

Also, when creating aligned DAB exports from multiple images in FIJI (10-channel images, fun), I had better luck with TrakEM2.

Dear Research_Associate,

Thanks for the reply. I had already found the thread you are referring to and, as you say, it describes moving objects, but not the images. I am trying to create a three-color image (stain 1, stain 2, hematoxylin) and analyze that image.

Below you can see the merged files. In the HyperStackReg file I have only displayed the matched hematoxylin channels, not the actual stains, so you can see the merge more clearly.


QuPath does it a lot better; that’s why I was hoping to use the transformation matrix that QuPath generates to do the alignment.

I have also looked into TrakEM2, but I have only found the manual image alignment option. Does TrakEM2 also offer an automated means of image registration?

Yes, though it has been a while since I used it; there was a YouTube video that ran through it. I haven’t done anything as complicated as extracting each hematoxylin channel, aligning those, and then applying the transform to the DAB channels.

I thought I remembered that the imageTransformServer could be used to write out shifted images, but I would recommend being careful about the degradation of your image as you rotate it. Transferring the objects from image to image does help avoid that, but I realize it isn’t as visually appealing for presentations. You should get similar results in the end, though. Hopefully; it is hard to rotate an image by anything other than 90 degrees without interpolation loss.
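If it helps, this is roughly what I had in mind (an untested sketch, assuming QuPath 0.2’s TransformedServerBuilder and an affine copied from the alignment dialog; the path is a placeholder and the values are the ones from your post):

import qupath.lib.images.servers.TransformedServerBuilder
import java.awt.geom.AffineTransform

// Affine from the alignment dialog; AWT ordering is (m00, m10, m01, m11, m02, m12)
// You may need createInverse() depending on which direction the alignment was estimated
def affine = new AffineTransform(0.936, 0.242, -0.249, 0.935, 1485.83, 990.22)

// Wrap the current image in a server that applies the transform on the fly
def transformed = new TransformedServerBuilder(getCurrentServer())
        .transform(affine)
        .build()

// Write the shifted image out (placeholder path)
writeImage(transformed, '/path/to/aligned.ome.tif')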

I’m also not sure QuPath is doing a better job of the alignment so much as favoring some of the disconnected parts in the lower left. Looking at the top right, the alignment seems better in your upper image; at least that’s my impression from the two images. I think you would need deformable image registration to get a better match.

Thanks for the suggestion of TrakEM2; it worked great.

The extraction of the hematoxylin channels was only a workaround, because HyperStackReg will only register 8-bit images. TrakEM2 does a really good job of aligning the whole (unaltered color) image! I can deconvolve the aligned output and then recreate the “double-stain” from there.

The next step is to automate this process via scripts.
I found a nice starting point here:

Or do you maybe have an old piece of code in an old folder somewhere? :slight_smile:


That part I never got to, as I only ever did test images, and they mostly decided against doing that kind of analysis, since the high-plex cell segmentation would be inaccurate for the cytoplasmic stains. Even in the same 5 µm slice (restained), you have cells overlapping each other above and below, so you would all too frequently see impossibly double-positive cells.

Ended up going with other types of analyses based on overall stain areas per object, entirely within QuPath (intersection of positively stained ROIs, and similar).
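For what it’s worth, the ROI intersection part can be quite simple; a rough sketch, assuming QuPath 0.2 and that the two stain-positive regions happen to be the first two annotations in the image:

import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.GeometryTools

// Assumption: the two stain-positive regions are the first two annotations
def annotations = getAnnotationObjects()
def g1 = annotations[0].getROI().getGeometry()
def g2 = annotations[1].getROI().getGeometry()

// JTS intersection of the two geometries
def intersection = g1.intersection(g2)
println 'Intersection area (px^2): ' + intersection.getArea()

// Optionally add the intersection back as a new annotation
def roi = GeometryTools.geometryToROI(intersection, ImagePlane.getDefaultPlane())
addObject(PathObjects.createAnnotationObject(roi))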

If you do get the TrakEM2 scripting working in batch, I’m sure there are all sorts of people on the forum that would appreciate a description of how exactly it was done, though!

@JSNL I haven’t used TrakEM2 myself, but if it can do what you need then I think it may well be the best solution.

For something QuPath-only, see the answer I’ve just posted at Images registration/alignement, export aligned images

Thanks Pete! The script and the new QuPath release look really nice and useful; I will try them out. It’s great that you made it usable for both IF/multichannel and ‘full color’ images.


Hi, I am trying to save an image with an affine transformation applied to it. I tried to use this script, but I am getting this error message:

ERROR: NullPointerException at line 91: Cannot invoke method readImageData() on null object

ERROR: org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
    org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:44)
    org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
    org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:34)
    org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
    qupath.lib.projects.ProjectImageEntry$readImageData$0.call(Unknown Source)
    Script5.run(Script5.groovy:92)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:926)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:859)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:782)
    qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1271)
    java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)

A little more about what I am trying to run: I have QuPath 0.2 and opened an H&E and an IHC slide in a new project. I set the image types to H&E and H-DAB. With the H&E slide open, I did the transformation on the IHC image to match it up to the H&E. The name of the IHC file is “002-Ki67.svs”.

These are the only parts of the script that I changed:

// Define a transform, e.g. with the (also unfinished) 'Interactive image alignment' command
// Note: you may need to remove .createInverse() depending upon how the transform is created
def os3Transform = GeometryTools.convertTransform(new AffineTransformation([-0.0175, 0.9998, 914.2124,
                                                                            -0.9998, -0.0175, 14496.8809] as double[])).createInverse()

// Define a map from the image name to the transform that should be applied to that image
def transforms = [
        '002-Ki67.svs': new AffineTransform(), // Identity transform (use this if no transform is needed)
        '002-1-Ki67.svs': os3Transform
]

Thank you!

I would guess that “entry” doesn’t exist, i.e. that the line before it failed:
def entry = project.getImageList().find {it.getImageName() == name}
You may want to try print statements like
print it.getImageName()
and
print name
to verify that they will be equal (prior to the line the error is on).

Thank you Research_Associate! I added the print statements:

for (def mapEntry : transforms.entrySet()) {
    // Find the next image & transform
    def name = mapEntry.getKey()
    print name
    def transform = mapEntry.getValue()
    if (transform == null)
        transform = new AffineTransform()
    def entry = project.getImageList().find {it.getImageName() == name}
    print it.getImageName()
    // Read the image & check if it has stains (for deconvolution)

I don’t think it is properly performing “getImageName” because this is the output:

INFO: 002-Ki67.svs
ERROR: I cannot find 'it'!

ERROR: MissingPropertyException at line 92: No such property: it for class: Script9

ERROR: org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
    org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
    org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
    Script9.run(Script9.groovy:93)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
    org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:926)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:859)
    qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:782)
    qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1271)
    java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base/java.lang.Thread.run(Unknown Source)

Could you try to
print project.getImageList()
I assume you have a project created, since you would need one to have two images, but I’m curious what the project consists of that might be causing a problem.

This is what is returned:

INFO: [001-HE.svs, 002-Ki67.svs, 003-P16.svs]
INFO: 002-Ki67.svs
ERROR: I cannot find 'it'!

With the rest of the error message after that.

Fair, it.getImageName() would actually need to be inside the find{} closure, sloppy of me. it is only defined there, so the print statement

print it.getImageName()

isn’t actually doing anything useful outside the closure; at that point it doesn’t exist at all (hence the MissingPropertyException). Remove that.
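In other words, something like this, with the print moved inside the closure where it is defined:

def entry = project.getImageList().find {
    print it.getImageName()   // 'it' refers to each project entry here
    it.getImageName() == name
}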

Hmm.
Just kind of flailing here, Pete will probably know better.
You could try to replace the def entry line with:

def entry = project.getImageList().find {it.toString() == name}

since your imageList appears to be correct.

It is kind of working now. I changed the file names in the transform definition; previously I gave ‘002-Ki67.svs’ a different name:

// Define a map from the image name to the transform that should be applied to that image
def transforms = [
        '002-Ki67.svs': new AffineTransform(), // Identity transform (use this if no transform is needed)
        '002-Ki67.svs': os3Transform
]

However, the image that is written looks like a fluorescence image:
Original:

File written with script:

The affine transformation was applied, but I only wanted to apply the transformation without changing the colors. Perhaps the colors were also inverted?

I believe that is correct and intended; you were writing color-deconvolved channels, so the resulting image should be “fluorescent”, or at least a stack of 6 single-color channels. You lost the RGB information when you extracted the color-deconvolved channels, and this is the cleanest remaining way to visualize all of the channels (since in optical density the base value is ~0, and increasing values mean increasing amounts of stain, just like IF). Two different slides could have two different backgrounds, different stain vectors, and different, odd ways to represent combinations of stains (eosin + DAB as red and brown would be messy!).
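To make the convention concrete: color deconvolution (Ruifrok & Johnston) works in optical density, where each channel is

\mathrm{OD}_c = -\log_{10}(I_c / I_{0,c})

with I_c the measured intensity and I_{0,c} the background (white) intensity in channel c, so 0 really does mean “no stain”, just like a dark background in IF.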

The math to create a white background would be a little more complex than inverting each image, I think. Though I am not an expert. Not saying it is impossible (I have seen something similar in inForm), but I’m not confident it is desirable.
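Roughly, getting back to a brightfield-like RGB means recombining the optical densities through the stain vectors, not just inverting intensities, i.e. something like

I_c = I_{0,c} \cdot 10^{-(\mathrm{OD}_{H}\, s_{H,c} + \mathrm{OD}_{DAB}\, s_{DAB,c})}

where s_H and s_DAB are the unit stain vectors; and with two slides there would be two backgrounds and two sets of stain vectors to reconcile.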

If you want the two images transformed without changing any of the colors, you might be able to adapt the script to write a 6-channel RGB×2 image instead of a 6-channel H&E+DAB image. It would look REALLY weird, though, except as either of the two single RGB original images :slight_smile:
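If you do go that way, something like this might work (an untested sketch, assuming QuPath 0.2’s TransformedServerBuilder; the image name and affine values are reused from earlier in the thread, and the output path is a placeholder):

import qupath.lib.images.servers.TransformedServerBuilder
import java.awt.geom.AffineTransform

// Assumption: the 'fixed' RGB image is open; the 'moving' one is another project entry
def serverA = getCurrentServer()
def entryB = getProject().getImageList().find { it.getImageName() == '002-Ki67.svs' }
def serverB = entryB.readImageData().getServer()

// Affine mapping image B onto image A (AWT ordering: m00, m10, m01, m11, m02, m12)
def affine = new AffineTransform(-0.0175, -0.9998, 0.9998, -0.0175, 914.2124, 14496.8809)

def transformedB = new TransformedServerBuilder(serverB)
        .transform(affine)
        .build()

// 3 RGB channels from A + 3 transformed RGB channels from B = 6 channels
def combined = new TransformedServerBuilder(serverA)
        .concatChannels(transformedB)
        .build()

writeImage(combined, '/path/to/rgb_x2.ome.tif')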

This is intended, and explained in the script comments (the paragraph starting at line 23):

Here is a simpler script you might try: https://gist.github.com/petebankhead/6d3a220074a1cb99caf6dc92ab71bfed

It only applies an affine transform to the current image, and writes the result.

Note the warning about pixel size (if you’re rescaling) - I’ve just noticed it when writing the script, and created an issue for it at https://github.com/qupath/qupath/issues/528

To apply an affine transform to an RGB image in general, you might also check out libvips:
https://libvips.github.io/libvips/API/current/using-cli.html

libvips supports reading pyramidal images (with OpenSlide) and also writing them. It also seems to support applying a pre-defined affine transform (though I haven’t tried it myself). Typically, libvips is much faster at writing images than anything else I’ve tried, but compatibility with downstream software needs to be checked, and metadata might be lost.


Great! It worked and did not take too much time to write the new file. Thank you very much!


Hi JSNL,

First, you need to specify what file size range you are referring to. For image pyramids, try this MATLAB post (https://www.mathworks.com/help/images/warp-big-image.html) and translate it to Python. The input is the tform; however, it needs a bigimage-supported format such as TIFF or BigTIFF (https://www.mathworks.com/help/images/ref/bigimage.html).

If you manage to do it in Python, please contact me. Thank you. Cheers,