Interactive image alignment

I see, do you know if there is a scripting mechanism to set these values?

There wouldn’t be at this point, since that works in the opposite direction of the intended information flow. You could always try making a feature request on the GitHub issues page, but that would be a different sort of function entirely. Maybe shift plus mousewheel to allow you to zoom the background image?

This skips the interactive GUI and goes straight to the overlay that’s actually being used:

import qupath.lib.gui.align.ImageServerOverlay
import javafx.scene.transform.Affine

// Define the transform (JavaFX Affine 6-arg constructor: mxx, mxy, tx, myx, myy, ty)
// Here this translates the overlay by 10000 pixels in x and 5000 in y
def affine = new Affine(1, 0, 10000, 0, 1, 5000)

// Get viewer & server - you'll probably want to get a server from somewhere else...
def viewer = getCurrentViewer()
def server = getCurrentServer()

// Create an overlay
def overlay = new ImageServerOverlay(viewer, server, affine)

// You can use overlay.getAffine() and make adjustments directly to the Affine object if needed

// Use set rather than add because we might need to run it a few times to get it right...
viewer.getCustomOverlayLayers().setAll(overlay)

I haven’t really looked at the code for this in a long time - it could change a bit when I finally return to it.


Is ImageServerOverlay used only for the alignment of the two images in the current alignment tool? Or is it used elsewhere as well? Wondering how badly I could abuse this for visual functionality :) Is there a limit to how many overlays we could load?

It’s intended to be more generally useful. You don’t need to specify an Affine transform if you just want to add another overlay on top.

There are a few different overlay options. If you want to overlay a ‘small’ image over all or part of the image, having it dynamically resized as required, then a BufferedImageOverlay is a better choice.

This could (for example) be used to show a low-resolution heatmap generated by a Python script in the context of the whole slide image. Here’s a fairly pointless demo:

import qupath.lib.gui.viewer.overlays.BufferedImageOverlay
import qupath.lib.regions.RegionRequest
import qupath.imagej.tools.IJTools
import ij.IJ

// Get viewer & server - you'll probably want to get a server from somewhere else...
def viewer = getCurrentViewer()
def server = viewer.getServer()

// Extract a low-resolution ImagePlus for the current ROI (bounding box) & invert it
def roi = getSelectedROI()
def downsample = 8.0
def request = RegionRequest.createInstance(server.getPath(), downsample, roi)
def imp = IJTools.convertToImagePlus(server, request).getImage()
IJ.run(imp, "Invert", "")
def img = imp.getBufferedImage()

// Show the resulting image on top
def overlay = new BufferedImageOverlay(viewer, request, img)

// Use set rather than add because we might need to run it a few times to get it right...
viewer.getCustomOverlayLayers().setAll(overlay)

It’s all a work in progress though and subject to change… I don’t think there’s any inherent limit to the number of overlays, but I can’t promise it all works very smoothly if you try adding a lot.


That looks suuuper useful for a few people who wanted to do some things with images I didn’t think were possible in QuPath. I was previously doing some terrible TMA downsampling, extraction of DAB “channels”, and creation of quasi-fluorescent overlay images in FIJI.


This works for me, thanks! However, the brightness and contrast settings seem to be taken from the viewer image, and sometimes these aren’t appropriate for the server image. For instance, one of the fluorescence images becomes so faint it’s barely visible when overlaid on a brightfield image, though the other way around works quite well. Might there be a way to load the B&C dialog for the overlaid image and adjust it after overlay, or at least keep the settings as they are when viewing the images individually?

Again, thanks a lot. I’ll keep exploring on my own too

I can’t see a straightforward way to do it without changing the code… the final null passed here is where the ‘renderer’ should be, which applies the brightness/contrast settings.

In this early form it is really only for brightfield/RGB images where the default settings are generally ok and if it works for fluorescence that’s more by luck than design.

The whole way brightness/contrast is handled in QuPath, along with color transforms, has evolved into something rather messy and unpleasant. It needs a more thorough revision one day… not just for this purpose, but also to make color transforms accessible in the pixel classifier and to avoid recalculating histograms when images are opened (which is what makes opening many images slower than it ought to be). It’s also somewhere on the todo list… in the meantime I’ll report back if I come up with a short term solution.


This may be a naive question, but how do I set the viewer and server in the script? Can I access it by image name? I see getCurrentViewer() in these examples and the server, but can I instantiate these classes besides just grabbing what is currently displayed? Thanks again.

I’m not at a computer right now, so this may be of quite limited use, but you can instantiate an ImageServer either directly from a URI, using a static method in the ImageServerProvider class, or (in m3 at least) via a ProjectImageEntry (which you can get through getProject().getImageList()).

The second way is preferable for consistency, and because an ImageServer might not be fully represented by a URI alone (e.g. identifying a specific series within a file using Bio-Formats).
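For illustration, a sketch of both approaches follows; the exact method names are assumptions based on the m3-era API (and the image name is a placeholder), so check them against your QuPath version:

```groovy
// Sketch only: signatures may differ between milestone versions
import qupath.lib.images.servers.ImageServerProvider
import java.awt.image.BufferedImage

// Option 1: build a server directly from a path/URI
def server = ImageServerProvider.buildServer('/path/to/image.tif', BufferedImage.class)

// Option 2 (preferred): go through the project entry
def entry = getProject().getImageList().find { it.getImageName() == 'my-image.tif' }
def server2 = entry.readImageData().getServer()
```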

Not sure why you’d want to instantiate a viewer rather than use the one within the main QuPath window…?


Is it possible to access the channel display information as in getImageDisplay() for an image that isn’t open? I’ve been getting a viewer instance through ‘getCurrentViewer()’ but I’d like to get display information for an image that isn’t currently open in the viewer but is loaded into the project. I figured out how to build the imageServer for this image, but I need to access display information for processing before overlay…

Thanks so much again.

The ImageDisplay isn’t really a property of the image, and is only applied as the image is rendered; in fact, it implements an ImageRenderer interface that transforms the ‘raw’ image (which could have any color model set, or none…) into something that can be drawn as an RGB image.

The ‘design’ has evolved rather oddly, not least thanks to my very incomplete understanding of the intricacies of Java BufferedImages, Rasters, ColorModels, SampleModels and so on when I started… which is why it needs a revision so badly. I’d very much like to simplify things, and it could be that better use of ColorModels or BufferedImageOps might make some of the custom QuPath stuff unnecessary. So I don’t really want to become even more dependent on the current design if possible.

Still, to try to make what you want to do possible in the short term I’ve just added a private field to the ImageServerOverlay class.

It is null by default and there are no setters or getters, but Groovy’s lax attitude to privacy means you can easily set the value in a script. So in m4 (when available) / if you build the code in my fork, you’ll be able to do this in the script above:

// Create an overlay
def overlay = new ImageServerOverlay(viewer, server, affine)
overlay.renderer = viewer.getImageDisplay()

Now the brightness/contrast settings will be bound to the current display settings. Potentially you could set a completely different renderer if you need one.

I didn’t add setters and getters because I’m not sure this is really a good way to handle the situation, but it could help you in the short term at least.
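To connect this to the earlier question about an image that isn’t open in the viewer, something along these lines might work; note that building an ImageDisplay around an ImageData directly is an assumption on my part, and the constructor may differ between versions:

```groovy
// Assumption: ImageDisplay can wrap an ImageData for an image that isn't
// open in any viewer; the class/constructor here may not match your milestone
import qupath.lib.display.ImageDisplay

// 'my-image.tif' is a placeholder name
def entry = getProject().getImageList().find { it.getImageName() == 'my-image.tif' }
def imageData = entry.readImageData()
def display = new ImageDisplay(imageData)

// Use it as the overlay's renderer instead of the current viewer's display
overlay.renderer = display
```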


I see. I’m going to try this. Since my particular need matches the second scenario (a low-res image overlaid on the high-res one), I was sending the low-res image to ImageJ, then sending it back as a BufferedImageOverlay. The issue I encountered was that when I sent the image, it sent the whole thing with no display information, so I was trying to collect the selected channels and display settings from QuPath and set up all that processing before sending it back to the BufferedImageOverlay. However, I could only get that display information via getCurrentViewer().getImageDisplay(), and I would like to access it without selecting this image in the QuPath GUI. Here is an incomplete code snippet. If I could get the getImageDisplay() information without having to pull it from getCurrentViewer(), I think it would work better. I see this information is stored in the .qpdata file for the image, but I don’t know how to access that through Groovy.

selected_channels = getCurrentViewer().getImageDisplay().selectedChannels()
available_channels = getCurrentViewer().getImageDisplay().availableChannels()

channel_names = []
channel_indices = []
channel_mins = []
channel_maxs = []

selected_channels.each { channelName ->
    index = available_channels.findIndexOf { it == channelName }
    channel_indices.add(index + 1) // match IJ's 1-based indexing
}

def imp = IJTools.convertToImagePlus(server, request).getImage()
IJ.run(imp, "Make Substack...", "channels=" + channel_indices.join(","))
names = WindowManager.getImageTitles()
imp2_substack = WindowManager.getImage(names[1])

def img = imp2_substack.getBufferedImage()

Hmmm, as far as I can tell imp2_substack.getBufferedImage() will create an 8-bit RGB image based upon the current display settings in ImageJ - regardless of how the image started out. Therefore the raw data for a multichannel image would be lost at that point. Not sure if this matters or not.

This commit I made 4 days ago may already help a bit if you’re sending the region to ImageJ through the GUI:
But it won’t turn off channels, which I presume you’re doing with make substack.

I’ve tried to run the script and it works fine. However, I was wondering whether there is a way to apply the Interactive image alignment (experimental) transformation directly from the window (not through a script), as I would like to perform this transformation in bulk for at least 9 slides per ROI. Scripting each of the transformations therefore seems more difficult than applying the changes directly.
Is there an option to make it work?
Or a script that could be used to apply the matrix defined through the tool?
Thank you!

Dear all,
I have tried to use ‘Interactive image alignment’ in the m9 version.
The previous script errors when I run it after copying the matrix in.
The error I got is below.

ERROR: It looks like you have tried to import a class 'qupath.lib.gui.helpers.DisplayHelpers' that doesn't exist!
You should probably remove the broken import statement in your script (around line 2).
Then you may want to check 'Run -> Include default imports' is selected, or alternatively add 
    import qupath.lib.gui.dialogs.Dialogs
at the start of the script. Full error message below:

ERROR: MultipleCompilationErrorsException at line 1: startup failed:
Script31.groovy: 2: unable to resolve class qupath.lib.gui.helpers.DisplayHelpers
 @ line 2, column 1.
   import qupath.lib.gui.helpers.DisplayHelpers
   ^

If you can fix this, it would be great!
Thanks a lot,

There are a few scripts included in the link below that might be useful, based off of Pete’s script earlier in this thread.

@inhwa I’ve edited your post to make the code/error more readable, which really means adding ``` at the top and bottom of the sections (if you click to edit the post again yourself and switch to ‘raw’ display you’ll see the change).

Anyhow, it looks like the script you ran isn’t exactly the same as the script quoted above. The error message refers to the fact that the script you ran must contain the following line:

import qupath.lib.gui.helpers.DisplayHelpers

This line can be removed in m9. Also, in m9 any time you see DisplayHelpers you should replace that with Dialogs.

I made the change because I think Dialogs is a more meaningful name, which will make it easier to find in the future. If you run a one-line script that prints the class’s methods, you’ll see that Dialogs gives access to lots of methods to show various prompts and dialog boxes in QuPath.
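For example, a one-liner along these lines would list them (the original post doesn’t show the exact script, so this is just one way to do it, relying on Groovy exposing getMethods() as a property on the class):

```groovy
import qupath.lib.gui.dialogs.Dialogs

// Print every method available on the Dialogs class
Dialogs.methods.each { println it }
```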


In case anyone else is interested in alignment scripts, I have written up a guide using some scripts to transfer objects around multiple images utilizing stored affine transformations. That makes multiple transfers or repeated transfers a bit smoother.


Thank you so much for your help!