Automatically align and rotate images

Hi @lakshmi,

We’re looking to align whole images, rather than trying to stitch them. They could be imagined as z planes. Sorry if that wasn’t clear. To align properly there will need to be some vertical and horizontal movement, which isn’t too difficult, but they’ll also need a bit of rotation which CellProfiler doesn’t seem to be able to do automatically.



Hi @DStirling,

I recommend checking out:

by @Christian_Tischer

It’s a wrapper around elastix, a tool I use often and have had good experiences with.



@bogovicj - That seems to work perfectly, thanks!


If you have suggestions for improvements, please let me know. I am maintaining this wrapper and am happy to have input from users.


Hi @Christian_Tischer,

It’s working pretty well! It’d be nice to be able to specify where to save transformed images, rather than them having to be placed in the temporary working directory. Also, when working with composite images is there an option somewhere to have it rebuild the composite after transformation? At the moment it’ll open each transformed channel in ImageJ separately. It’s not too much trouble to write a script to overlay the channels again and then save them with an appropriate location/filename, but I’m wondering if there’s a better solution.



Kind of curious about this as I might know someone who would benefit, but what size images are “large” for you? File size, pixel count?



I suppose it does depend on a personal definition of ‘large’. I’d consider ‘large’ to be a single image which starts to challenge ImageJ’s default memory or dimension limits. In this case the images are >30,000 x 30,000px, although for testing I’m using scaled-down versions.


Hmm, I just recently became interested in this again, and I’m guessing 80–100k px per side is going to make things difficult for Fiji-associated plugins. Maybe elastix can be used on whole slide images directly? I’ll be looking into it; I didn’t see a size limitation, but I haven’t searched that extensively yet.

I think in Fiji this could be a problem. But you can also use Elastix directly on the command line or from Python. I do not know what the limitations are there. You can ask here, I think: !categories/elastix-imageregistration/elastix


You’re right, there’s a known issue in ImageJ where it simply can’t open images above ~27000x27000 pixels properly. I expect running elastix from the command line should work, but I’d anticipate that on such a large image it may take a few hours to process depending on the settings used.
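For what it’s worth, here is a minimal sketch of driving elastix from Python via its command-line interface. The `-f`/`-m`/`-p`/`-out` flags are elastix’s standard arguments; the file names, parameter file, and output folder are placeholders you would substitute with your own.

```python
import subprocess

def build_elastix_cmd(fixed, moving, param_file, out_dir):
    """Assemble an elastix CLI invocation (elastix must be on PATH).

    -f   fixed (target) image
    -m   moving image to be registered
    -p   elastix parameter file
    -out output directory for result image and transform files
    """
    return ["elastix", "-f", fixed, "-m", moving,
            "-p", param_file, "-out", out_dir]

# Placeholder file names for illustration only:
cmd = build_elastix_cmd("fixed.tif", "moving.tif",
                        "params_affine.txt", "elastix_out")

# To actually run it (commented out so the sketch stays self-contained):
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

On an image that large the run may still take hours, but at least nothing ever has to pass through ImageJ’s canvas.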

If you’d like to scale them down, it turns out that IrfanView has no trouble handling oversized images and scaling them down to something ImageJ can handle.


Thanks, that was about what I figured. Scaling down isn’t going to work, unfortunately, as there is a good chance we will need to do some deformation (so no snagging the low res affine) to get approximately cell to cell alignment. It may just be a slow, grinding process. But if elastix can get the job done, in the end it is just computing time. Potentially cloud computing time at scale. Maybe.


Come to think of it, I don’t think the ImageJ wrapper actually needs to open the images within ImageJ itself. I think it essentially just sends the command to the external program and there’s an option to export the result rather than opening it in ImageJ, so you might be able to just use the plugin anyway on the full size images.



Apologies if you know everything below already.

The following is possible (and easy), as long as the metadata for your images are correct.

  1. Downsample an image (making sure the pixel spacing in the metadata is correct relative to the original)
  2. Register that lo-res image to some target (also possibly low-res), resulting in some transform (T)
  3. Apply exactly that transform (T) to the high-res image.

The result of (3) will be what you would want (when using elastix and other good registration libraries).
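The reason step (3) works is that registration libraries like elastix express transforms in physical coordinates, so a transform estimated at one pixel spacing applies unchanged at another, provided the spacing metadata is correct. A toy 1-D sketch (all numbers here are made up for illustration):

```python
def pixel_to_physical(index, spacing):
    # Physical position of a pixel centre, assuming origin at 0
    return index * spacing

def physical_to_pixel(x, spacing):
    return x / spacing

# Toy transform estimated in *physical* units: x -> a*x + b
a, b = 2.0, 5.0            # hypothetical values
T = lambda x: a * x + b

full_spacing = 0.5          # e.g. µm per pixel at full resolution
down_factor = 8
low_spacing = full_spacing * down_factor  # metadata must record this!

# The same tissue location, indexed in each image:
x_phys = 100.0                                    # µm
i_full = physical_to_pixel(x_phys, full_spacing)  # 200.0
i_low = physical_to_pixel(x_phys, low_spacing)    # 25.0

# Applying T in physical space gives the same answer from either image:
out_full = T(pixel_to_physical(i_full, full_spacing))
out_low = T(pixel_to_physical(i_low, low_spacing))
assert out_full == out_low == 205.0
```

If the downsampled image’s spacing metadata were left at the original value, the transform would silently land in the wrong place at full resolution, which is why step (1) stresses getting it right.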

This will be a problem only if there is high spatial frequency deformation necessary. In every task I’ve come across, this kind of deformation is exactly what I want to avoid. The point being, I find it really useful to go through the above process since (in my experience) using the highest-res original images is computationally wasteful (in the best case), and sometimes gives worse results.

You’re probably right (@Christian_Tischer?), though it may not work if the file is in a format that does not play nicely with ITK (see this thread)



Thanks! And ideally what you described first is exactly how we would want to proceed, but I don’t think it will give accurate cell-cell information due to problems with the slide scanning (stitching artifacts), slight stretching and sliding of parts of the tissue, etc. If you have been able to get single cell accuracy on overlays using downsampled alignment, that is great news, and hopefully we will see similar!

With practice many of those pre-analysis problems will be mitigated, but who knows. The plan is to give downsampling a try first anyway and see how the results look, since there isn’t a quick and easy way of accomplishing the high-res alignment that would be preferable.

The objective is multiplex cell by cell quantification of whole slide images using strip and restain methods. That will require very precise alignment of not so perfect images, though the overall tissue structure shouuuuuuuld be the same. Though it won’t be :slight_smile:

I have seen this work and accomplished it in Visiopharm, which I have access to, but Visiopharm doesn’t play nice with outputting the aligned images for analysis in other software, which makes that usefulness… limited. Plus, figuring out an entirely open source pipeline has its own benefits.


You know your data better than I do, but if it’s relatively low effort, then I’d say it’s worth a try.

:+1: :+1:


Would love to hear how things work out, please keep us posted!


I’d love to hear too if anything has happened on this front in the year that’s passed. Since getting the linear pipeline in QuPath up on whole scanned slides (thanks to @Research_Associate and @petebankhead) I’ve been looking at the results and the residual misalignment after applying a rigid transformation (to the same restained and rescanned slide). The distortion looks nonlinear, but very low frequency. In principle, doing a low resolution nonlinear alignment based on the tissue outline even without a shared stain doesn’t seem too hard (I assume this is what Visiopharm’s solution is doing; I’ve been thinking about demoing it). In practice I imagine there are significant purely engineering gotchas to doing it on whole slides. Is there an open source process that is working for anyone?


I don’t have anything, as we resorted to a more general stain based analysis rather than true cellular multiplex classifications. That worked well enough with rigid alignment.

I think @smcardle was working on something with warps and using that information back in QuPath, but I am not sure whether that is applicable to whole slide → whole slide, or just maps → whole slides. Or whether it is even in a shareable state.


I encountered a similar problem, but I don’t know if my solution will help you. It uses ImageJ’s bUnwarpJ to do the elastic registration of downsampled whole slides based on tissue outline, then transforms the annotation or detection objects from one image and adds them to another. It does not do true image alignment where you can overlay the stains on top of each other and visualize them or train a classifier off of both of the channels simultaneously. And the final objects are not exactly the same as the starting objects. But, for my purposes, it was good enough to see if objects created in 2 different images from 2 different markers overlap enough to call them colocalized.

The procedure is not as automated as I would like, but here goes:

  1. In QuPath, duplicate whatever will be your Image1. Delete all of the objects except the tissue outline. Make sure you are zoomed to fit. Deselect all objects, then send the whole image to ImageJ (through Send region to ImageJ), downsampled to a known pixel size. A smaller pixel size is better, of course, but ImageJ will have to keep a few of these full slide images in RAM, so be careful. Check “Include overlay”. Then, in ImageJ, rename the image “Image1”.

  2. Duplicate Image1 again, then delete everything except the objects that you are trying to transform. Repeat the process to send to ImageJ, but name the image “Objects”.

  3. Duplicate Image2, deleting everything except the tissue outline. Repeat the steps above (naming it Image2, of course). If Image 2 has a different resolution because it was taken on a different scope or with different settings, that’s fine, but make sure that the images that are sent to ImageJ have matching pixel sizes.

  4. In QuPath, open the version of Image2 that you want to add the transformed annotations into.

  5. Run this script. This is an ImageJ macro written in the IJ1 macro language. You’ll have to adjust the paths for saving files, the bUnwarpJ settings in line 29, and potentially the threshold in line 35. The objects will appear in whichever file is open in QuPath as a single, merged annotation object.


// Path for saving the transforms – adjust to your own folder
transformFolder1 = "C:/temp/transforms/";

// The selectWindow calls below assume the three images sent from QuPath
// were renamed "Image1", "Image2" and "Objects" as described above.
if (roiManager("count")>0){

	selectWindow("Image1");
	run("Create Mask");
	rename("Image1 Mask");

	selectWindow("Image2");
	run("Create Mask");
	rename("Image2 Mask");

	selectWindow("Objects");
	run("Create Mask");
	rename("Objects Mask");

	selectWindow("Image2 Mask");
	getDimensions(width, height, channels, slices, frames);

	selectWindow("Image1 Mask");
	run("Canvas Size...", "width="+width+" height="+height+" position=Top-Left zero");
	run("bUnwarpJ", "source_image=[Image1 Mask] target_image=[Image2 Mask] registration=Accurate image_subsample_factor=0 initial_deformation=[Very Coarse] final_deformation=[Very Fine] divergence_weight=0 curl_weight=0 landmark_weight=0 image_weight=1 consistency_weight=10 stop_threshold=0.01 save_transformations save_direct_transformation=["+transformFolder1+"Image1 Mask_direct_transf.txt] save_inverse_transformation=["+transformFolder1+"Image2 Mask_inverse_transf.txt]");

	selectWindow("Objects Mask");
	run("Canvas Size...", "width="+width+" height="+height+" position=Top-Left zero");
	call("bunwarpj.bUnwarpJ_.loadElasticTransform", transformFolder1+"Image1 Mask_direct_transf.txt", "Image2 Mask", "Objects Mask");

	setThreshold(15.9000, 1000000000000000000000000000000.0000); //set this threshold
	setOption("BlackBackground", true);
	run("Convert to Mask");
	run("Analyze Particles...", "add");

	run("Send ROI to QuPath");
}
What this script does is:
a) Converts all of the overlays into binary images
b) Matches the canvas sizes of the different images
c) Runs bUnwarpJ and saves the transforms
d) Applies the transform to the binary Objects image. This creates a grayscale image with the objects transformed but their edges slightly blurred.
e) Uses Analyze Particles to re-find the objects
f) Pushes them back to QuPath into the open image

Since this is all happening in downsampled space, if your objects are very close to each other, they may get merged in this process. Also, you lose identifying information / object names so tracking them may be tricky. But, it’s enough to get the equivalent of a low resolution spline alignment, all with open source software.


This looks cool, thanks, and I’ll give it a try, it seems to address the downstream need for analysis. An image users can visualize/play with easily would be awesome (and is what I’m being asked for :slight_smile: ) but maybe isn’t strictly necessary…


Hi David,

I recently built a set of scripts to perform batch image alignment.

In my case, I had 20 tumours, with 3 serial sections taken from each and stained with multiple immunofluorescent markers. By separating the tumour name (SlideID) and serial section name by an underscore, the script calculate_transforms.groovy would perform fully-automated batch image alignment based on intensity (or area annotations if available).
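For anyone else adopting that naming convention, the SlideID/section split described above could be parsed like this. This is just an illustrative sketch: the file names and the helper function are hypothetical, not part of the actual scripts.

```python
from collections import defaultdict

def parse_section_name(filename):
    """Split 'SlideID_Section' style names, e.g. 'Tumour07_S2.ome.tif'.

    The convention (SlideID and serial-section name separated by an
    underscore) follows the description above; file names are made up.
    """
    stem = filename.split(".")[0]            # drop extension(s)
    slide_id, _, section = stem.partition("_")
    return slide_id, section

assert parse_section_name("Tumour07_S2.ome.tif") == ("Tumour07", "S2")

# Group serial sections by tumour so each group is aligned as one batch:
files = ["Tumour07_S1.tif", "Tumour07_S2.tif", "Tumour12_S1.tif"]
groups = defaultdict(list)
for f in files:
    slide, section = parse_section_name(f)
    groups[slide].append(section)
# groups now maps 'Tumour07' -> ['S1', 'S2'] and 'Tumour12' -> ['S1']
```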

Hopefully that’s of some use!
