Registering images from different modalities

Hi all,
I’m looking for a way for non-programming end users to overlay and align two images from very different modalities (electron microscopy and digital pathology). Both are very large-extent tiled/pyramidal images that don’t easily read into ImageJ/Fiji at full resolution. The EM image is an OME-TIFF that reads fine into QuPath, as does the SVS-format slide scan file. I’d like to do the registration interactively (and visually) using common recognizable structures in the two images. However, I don’t see an obvious way in QuPath to scale the images interactively to make them match in scale, and the translate and rotate options in the “Interactive image alignment” tool don’t seem to work as I was hoping. Does anyone know of a way to do this, either in QuPath (preferred, because it does so many other things so well) or in another package?
Thanks,
Damir

Not sure if it is interactive enough, but using the scaling parts of the affine matrix in the Interactive image alignment dialog, you should be able to scale the images together. It is not smooth like scrolling a mousewheel, but you can iterate and click the Update button.

1.0 1.0 means no scaling in either the X or Y direction.
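For anyone unfamiliar with the matrix: the diagonal entries (m00 and m11) of the 2×3 affine transform are the X and Y scale factors. A small illustrative sketch (plain NumPy, not QuPath code):

```python
import numpy as np

# A 2D affine transform as a 2x3 matrix: [[sx, shx, tx], [shy, sy, ty]]
# sx and sy are the X/Y scale factors; 1.0 1.0 means no scaling.
def apply_affine(matrix, points):
    """Apply a 2x3 affine matrix to an (N, 2) array of XY points."""
    pts = np.asarray(points, dtype=float)
    return pts @ matrix[:, :2].T + matrix[:, 2]

identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
half_scale = np.array([[0.5, 0.0, 0.0],
                       [0.0, 0.5, 0.0]])

print(apply_affine(identity, [[100, 200]]))    # unchanged
print(apply_affine(half_scale, [[100, 200]]))  # both coordinates halved
```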

Thanks @Research_Associate
While maybe not exactly user friendly, let me see if the users can deal with that. I’ll put in a feature request for an interactive scaling addition to this tool to make it friendlier, whenever that bubbles up on Pete’s radar.

But one more question: once I’m happy with the registration inside the tool, is there a way to “bake in” that transform and even merge the two images so the transformed image becomes (an) additional channel(s) in the underlying EM image? After that I would have full access to all the great QuPath tools. I’m currently at this step (the blue image is a stand-in, but the real toluidine-blue image would look similar):

Thanks!
Damir

I’ve been trying to suppress my radar when it comes to registration, but it’s there :slight_smile:
The interactive alignment command is very neglected.

Possibly… but sadly not very well if you have a combination of image/channel types (e.g. RGB + anything else). It is really intended for concatenating fluorescence channels, or brightfield images that are color deconvolved.

This script is the most relevant that I can recall writing: Merge images along the channels dimension in QuPath v0.2.0 · GitHub

I’m especially interested in this topic because it’s grant-writing time, and one of the things I’ve been thinking about is the need to handle images from multiple modalities. I think QuPath’s current design only enables this in a very awkward and limited way, and doing it properly would require a lot more work. But my guess is that this work would improve the flexibility of the software a lot. I’m very interested in learning more about applications where this would be useful.

Can confirm that it “can work” for cross-modality images (H&E + IF staining, for example) as long as you choose the brightfield image first, and the brightfield image is the larger of the two after resizing (anything not in the range of the brightfield server will be clipped off in the resulting image).

And, of course, accept that you are working off of brightfield deconvolved channels!

Just want to throw in here that @phaub’s newest script to apply LUTs to specific channels might be very useful in the case of EM images, I think. I am not experienced with them at all, but it seems like “white” is the background, so you would want an inverted LUT once you turn it into an “IF channel.”

I might also be off my rocker.

You should be able to get an inverted LUT by making the max value lower than the min value in the brightness/contrast dialog.
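In other words, the linear display mapping simply runs backwards when the max is below the min. A small illustrative sketch (not QuPath code):

```python
import numpy as np

def linear_lut(values, display_min, display_max):
    """Map raw intensities linearly onto the [0, 1] display range.
    Setting display_max < display_min inverts the LUT:
    high raw values render dark, low raw values render bright."""
    values = np.asarray(values, dtype=float)
    out = (values - display_min) / (display_max - display_min)
    return np.clip(out, 0.0, 1.0)

# Normal LUT: 0 -> black, 255 -> white
print(linear_lut([0, 255], 0, 255))   # [0. 1.]
# Inverted LUT (max < min): 0 -> white, 255 -> black
print(linear_lut([0, 255], 255, 0))   # [1. 0.]
```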

Huh, that is even easier. I’ve done that all the time for Measurement Maps, but it never occurred to me to try it for Brightness & Contrast. :man_facepalming:

Thanks @petebankhead and @Research_Associate. Sounds like I should be able to cobble something together with some scripts: deconvolve the toluidine-blue RGB image into its components, then use a version of the “Merge image along …” script (after somehow picking up the desired affine transform parameters from the Interactive image alignment module) to add the single-channel EM image to it, and possibly invert the LUT of the EM channel.
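As a rough conceptual sketch of that pipeline (plain NumPy, not QuPath/Groovy code; the array shapes, scale factor, and nearest-neighbour resampling are illustrative assumptions):

```python
import numpy as np

def resample_to_reference(moving, inv_affine, offset, reference_shape):
    """Resample 'moving' onto the reference image's pixel grid.
    inv_affine/offset map reference (y, x) coordinates into 'moving'
    coordinates (the inverse of the alignment transform);
    nearest-neighbour sampling keeps the example dependency-free."""
    ys, xs = np.indices(reference_shape)
    coords = np.stack([ys, xs], axis=-1).astype(float) @ inv_affine.T + offset
    yi = np.clip(np.rint(coords[..., 0]).astype(int), 0, moving.shape[0] - 1)
    xi = np.clip(np.rint(coords[..., 1]).astype(int), 0, moving.shape[1] - 1)
    return moving[yi, xi]

# Hypothetical single-channel EM mosaic and one deconvolved stain channel
em = np.random.rand(512, 512).astype(np.float32)
stain = np.random.rand(256, 256).astype(np.float32)

# Alignment found interactively: stain is at half the EM resolution,
# so reference -> moving coordinates scale by 0.5
inv_affine = np.array([[0.5, 0.0], [0.0, 0.5]])
stain_resampled = resample_to_reference(stain, inv_affine, (0.0, 0.0), em.shape)

# "Merge along the channels dimension": stack into a (C, Y, X) image
merged = np.stack([em, stain_resampled])
print(merged.shape)  # (2, 512, 512)
```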

And @Pete, for your grant-writing: our EM folks are doing more and more correlative microscopy, where the structure comes from the EM and the function (i.e. labeled biomarkers) comes from fluorescence or IHC. With the ability of modern-day SEM microscopes (we have both FEI/Thermo and Zeiss) to tile and stitch over relatively large slabs of tissue, the XY extent of such a large tiled EM image is similar to the XY extent of a light microscopy image.

Both vendors provide proprietary software that can do this type of merging of the information from the different modalities (MAPS from FEI: Maps, for automated acquisition of high resolution images from large areas. | Thermo Fisher Scientific, and Zeiss’s ZEN Connect: ZEN Connect - overlay and organize images to connect multimodal data), but I don’t have to tell anyone on this forum the downsides of being stuck inside a proprietary environment.

We are developing segmentation methods to delineate structures from the EM images (see: https://www.biorxiv.org/content/10.1101/675371v1) and want to correlate those to the identified structures in the light microscopy images (https://www.cambridge.org/core/journals/microscopy-today/article/simple-methods-to-correlate-light-and-scanning-electron-microscopy/0190488B314CCD334D87BCDAD19DF03F). In relevant tissues (in tumor biology and most other areas), all of this needs to be done over very large images, and not much software exists that readily handles those. QuPath happens to be one that can. I’ll be happy to provide more info if needed to support what you’re interested in proposing.

Cheers.

I imagine there will really only be one exactly correct value for this per pair of imaging modalities, so it is probably technically better if it is entered as a number: the ratio of the pixel sizes. Zooming in and out manually will likely result in being “off” by a tick, or half a tick, introducing unnecessary error in any measurements or objects created.
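For example, with hypothetical pixel sizes taken from the two images’ metadata, the exact scale factor is just their ratio:

```python
# Hypothetical pixel sizes (um per pixel) read from each image's metadata
em_pixel_size = 0.05      # e.g. a 50 nm/px EM mosaic
slide_pixel_size = 0.25   # e.g. a 0.25 um/px slide scan

# Exact scale factor to apply to the slide image to match the EM grid
scale = slide_pixel_size / em_pixel_size
print(scale)  # 5.0
```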

This should be handled by the script automatically, though there are places you can play around if you ONLY want certain channels rather than all three, which I have seen used when combining many different brightfield images.

I use
https://imagejdocu.tudor.lu/plugin/aligning/align_slice/start

- get the images in a stack
- make a selection (a line, box, polygon, etc.) in image 1 identifying the structure of interest
- switch to image 2
- run the Align Slice plugin
- move, rotate, and scale the second image’s structure of interest to coincide with the selection
- flip back and forth between the aligned images with the arrow keys

Thanks @rondespain. Unfortunately these images are all way too large at full resolution for ImageJ/Fiji. The EM image tends to be > 100K pixels along each axis and the accompanying slide scan image isn’t much smaller. Trying to read one into ImageJ/Fiji gives an immediate error:

```
(Fiji Is Just) ImageJ 2.1.0/1.53c; Java 1.8.0_172 [64-bit]; Windows 10 10.0; 278MB of 48983MB (<1%)

java.lang.IllegalArgumentException: Array size too large: 133120 x 125952 x 1
	at loci.common.DataTools.safeMultiply32(DataTools.java:1286)
	at loci.common.DataTools.allocate(DataTools.java:1259)
	at loci.formats.ChannelSeparator.openBytes(ChannelSeparator.java:160)
	at loci.formats.ReaderWrapper.openBytes(ReaderWrapper.java:334)
	at loci.formats.DimensionSwapper.openBytes(DimensionSwapper.java:233)
```
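The underlying limit is that a Java array is indexed by a 32-bit int, so the `safeMultiply32` check in the stack trace refuses any single plane larger than 2^31 - 1 elements. A quick arithmetic check (in Python, just to show the numbers):

```python
# Java arrays are indexed by int, so at most 2**31 - 1 elements per array
JAVA_MAX_ARRAY = 2**31 - 1

# Plane dimensions from the "Array size too large" message above
width, height, channels = 133120, 125952, 1
n_elements = width * height * channels

print(n_elements)                    # 16766730240
print(n_elements > JAVA_MAX_ARRAY)   # True: ~7.8x over the limit
```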

And we need to do this “merging” at the high res in order to be able to zoom into the detail of the EM image.

Thanks for your suggestion.
Damir

Yes, @Research_Associate, that’s a good point. In fact, I’m now looking to extract the actual pixel size from each of the two images’ metadata so my modified script can set the scale value automatically from that.
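In case it helps anyone, here is a hedged sketch of that metadata step, assuming the OME-XML carries PhysicalSizeX/PhysicalSizeY on the Pixels element (the XML string below is a hypothetical fragment; in practice it would come from the file itself, e.g. via Bio-Formats or tifffile):

```python
import xml.etree.ElementTree as ET

def pixel_size_from_ome_xml(ome_xml):
    """Return (PhysicalSizeX, PhysicalSizeY) from an OME-XML string."""
    root = ET.fromstring(ome_xml)
    # OME-XML elements are namespaced; match the Pixels element by local name
    pixels = next(el for el in root.iter() if el.tag.endswith('Pixels'))
    return (float(pixels.attrib['PhysicalSizeX']),
            float(pixels.attrib['PhysicalSizeY']))

# Minimal hypothetical OME-XML fragment for illustration
sample = """<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">
  <Image><Pixels PhysicalSizeX="0.05" PhysicalSizeY="0.05"
                 SizeX="133120" SizeY="125952"/></Image>
</OME>"""

print(pixel_size_from_ome_xml(sample))  # (0.05, 0.05)
```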

Usually images taken by different instruments suffer elastic deformation between the collections. If your images weren’t taken simultaneously, or the imaged material isn’t completely rigid, then you can’t expect a good alignment anyway. Registration of the low-res images might be usable when scaled to the high-res images in the pyramid. Looking at your past posts, I see that you are having trouble extracting the tiles in the pyramids you have. No access to those tiles makes processing a problem: if you can’t extract the layers, you can’t do any processing.
Good luck

I took the liberty of extending Pete’s ‘Interactive image alignment’. Maybe this will help on the way to the solution.

Have a look here

Hi @dsudar

Just in case you’re not familiar, I and others use Bigwarp for registration of very large images (conflict of interest alert, I wrote and maintain Bigwarp). Maybe it’ll help you, maybe it won’t, but thought I’d poke in case you’re not familiar.

John

P.S. @NicoKiaru is doing great work toward making bigwarp and qupath play nicely together :smiley:

Thanks all for the great responses.
@bogovicj : Bigwarp looks like a great tool with a lot of capability and complexity. For my current purpose, I need something that can easily be run by end users with a simple UI.
@phaub : That is absolutely amazing and I’m very grateful. ImageCombiner is exactly what I need for this purpose.
Cheers,
Damir

@dsudar Not sure if the update of the ImageCombiner is of interest in your use case: