Registering images from multiple cameras

Hello friendly image analysts!

I’m using a microscope that relies on 3 cameras to collect simultaneously in each of 3 different channels. We’ve attempted to align these cameras as well as possible, but it’s very difficult (nigh impossible) to achieve pixel-perfect alignment. Using 200 nm TetraSpeck beads, I have collected an image with landmarks visible in each channel, and hope to use this image for alignment (files available through the link at the bottom of this post).

The workflow I would like to use is:

  1. Record alignment image
  2. Conduct my other imaging
  3. Calculate alignment from alignment image
  4. Apply to all images

Using the Descriptor-based registration (2d/3d) I can accurately calculate the transformation between, for example, channel 0 and channel 1. I can return the transformation (e.g. [3,3](AffineTransform[[0.999970929479262, 0.007624971893748, -11.695266931561417], [-0.007624971893748, 0.999970929479262, -16.720713275268054]]) 1.654253271494951) from this as well. But I have no easy way to apply this transformation to a fresh stack. Is there something I’m missing here?
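For context, the transform string above encodes a 2×3 affine matrix (two rows of [scale/rotation, scale/rotation, translation]) followed by a residual error. A quick sketch of parsing it, in plain Python for illustration (the regex and helper name are my own, not part of any plugin):

```python
import re

def parse_affine(s):
    # Pull the two bracketed rows out of the
    # "[3,3](AffineTransform[[a, b, tx], [c, d, ty]]) err" string.
    body = re.search(r'AffineTransform\[\[(.*?)\],\s*\[(.*?)\]\]', s)
    row0 = [float(x) for x in body.group(1).split(',')]
    row1 = [float(x) for x in body.group(2).split(',')]
    return [row0, row1]  # 2x3 row-major affine matrix

example = ("[3,3](AffineTransform[[0.999970929479262, 0.007624971893748, "
           "-11.695266931561417], [-0.007624971893748, 0.999970929479262, "
           "-16.720713275268054]]) 1.654253271494951")
matrix = parse_affine(example)
```

Here the last column of each row is the translation in pixels (about -11.7 and -16.7 for this channel pair).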

I contacted Stephan Preibisch (author of SPIM Registration and the Descriptor-based registration plugin) for advice, and he suggested using their new tool suite, BigStitcher. As BigStitcher seems mostly designed for orthogonal views or image stitching, it’s not immediately clear to me how to conduct channel registration, and the method that seemed most promising (Multiview -> Detect Interest Points) pulls up the same display as Descriptor-based registration but fails to detect any points, even with the same settings that worked above.

Any recommendations for a scriptable protocol for doing what I’m aiming for?


Alignment images:
Channel 0
Channel 1
Channel 2


Bumping for the Monday crowd - anyone have advice for using bead images to register live-cell microscopy data in a programmatic manner?

Hi @weinberz,

Welcome to the forum, and sorry for missing your post until just now.

You’re right, the situation was not great. I wrote this script to help address it (see also this forum thread).

If you give the script this part:

[3,3](AffineTransform[[0.999970929479262, 0.007624971893748, -11.695266931561417], [-0.007624971893748, 0.999970929479262, -16.720713275268054]]) 

of your output, then the script will (hopefully) apply it to the open image.
Actually, after double-checking: I was lazy when writing that script, and I think it may be 2d-only.

Are your data 3d? If so I can quickly modify it. Or if that script isn’t working for some unknown reason, please post back and I’ll help troubleshoot…

This is surprising - I’m not sure why that would happen, but @hoerldavid may know.



Thanks @bogovicj, this looks like exactly what I need! Any chance you’ve figured out how to extract that transformation information from Descriptor-based registration so that I wouldn’t need to copy and paste? I’ve never used Groovy for scripting before - I’m guessing the solution is basically importing ij.IJ, using the getLog method, and parsing the contents after running Descriptor-based registration - should be a good learning opportunity for me :slight_smile:

As for the struggles I’ve been having with BigStitcher, I think it’s an environment problem - trying to find a setup that works. Unfortunately, installing BigStitcher from Fiji’s update manager breaks Descriptor-based registration (looks like a known issue), so I’ll try your solution for the time being.



Hi @bogovicj,

Sorry to bother! Totally new to working this close to the ImageJ metal (mostly just familiar with macros).

I’ve got the code required to pull the transformation from the log:

import ij.IJ;

logged_text = IJ.getLog().replaceAll('\n$','');
affine_string = logged_text.substring(logged_text.lastIndexOf("\n")).trim();

but I’m running into trouble trying to apply this to an image stack. The data I’m attempting to use here have 3 dimensions (X, Y, and time), so I’d like to apply the same transformation to each frame over time.

More than happy to learn some groovy here, but I can’t find a good resource for manipulating a currently open stack of images. Any chance you could point me in the right direction?

Thanks again!


Hi @weinberz,

I would like to introduce my plugin, CoordinateShift.
Please see the introduction movie on the linked site.
Note that this plugin does not have an affine transformation function.
It does, however, have a function that can reuse shifted coordinate data.
This plugin can shift a HyperStack image along the t-axis (the ch-axis and z-axis will be shifted by the same coordinates).
So how about converting your data as follows and using it:
channel -> t-axis, t-axis -> z-axis

*When I tried to correct your image, it worked well after focusing on a part of it using an ROI.


Hi @weinberz,

Nice job getting that far!

Here’s a helper for looping over 3d subsets of a 4d volume (where the 4th dimension is time). A more complete example with context is below:

import net.imglib2.view.Views;

timeDimension = 3;
for (i in (0..<numTimepoints)) {
	volumeAtTimeFrame = Views.hyperSlice( img, timeDimension, i );
	// ... work on this 3d volume ...
}

Don’t feel compelled to stick to Groovy if it turns out to be slowing you down. The small exception is that I would recommend moving away from the ImageJ macro language if possible, since other languages tend to be more flexible. But if you prefer Python, say, then by all means stick with it.

What you may find trickier than moving to Groovy is switching from the ImageJ data model (ImagePlus) to the ImageJ2/imglib2 data model (Dataset).

This forum is a great place to get help for both those topics, but I’ll link to some resources I found helpful here, just in case.


Full script below:

// @Dataset img
// @UIService ui

import net.imglib2.view.Views;

numTimepoints = img.getFrames()

// find the time dimension (if we know it has units of "sec"s)
timeDimension = -1;
for( d in 0..< img.numDimensions()){
	if (img.axis( d ).unit().equals("sec")){ timeDimension = d; }
}

for (i in (0..<numTimepoints)) {
	volumeAtTimeFrame = Views.hyperSlice( img, timeDimension, i );
}

// display the last time point
ui.show( volumeAtTimeFrame )


@hwada: thanks for the suggestion! Given the misalignment I think I require AffineTransform though :frowning:

@bogovicj: thanks for the links and sample code! You’re absolutely right - adjusting to Dataset over ImagePlus (also figuring out how to handle RandomAccessibles to actually modify images) is definitely the challenge for me!

I’ve managed to get a working script that, given a transform from Descriptor based registration and a single open stack, can transform all images in the stack:

#@Dataset img
#@String(label="affine transform string") affineString
#@UIService ui

// original:

import ij.IJ;

import net.imglib2.img.*;
import net.imglib2.realtransform.*;
import net.imglib2.view.*;
import net.imglib2.img.ImagePlusAdapter;
import net.imglib2.interpolation.*;
import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory;

nd = 3
if( affineString.startsWith( "[3,3]" )){
	println( "2" )
	nd = 2
} else if( affineString.startsWith( "[4,4]" )){
	nd = 3
}

paramString = affineString.replace( "[3,3](AffineTransform", "" ).replaceAll( "\\) .*", "" ).replaceAll( "[\\]\\[\\s]", "" )
paramDouble = []
for ( p in paramString.split(",") ) {
	paramDouble.add( Double.parseDouble( p ));
}

AffineTransform2D transform = new AffineTransform2D();
transform.set( paramDouble as double[] )
println( transform )
interp = new NLinearInterpolatorFactory()

numTimepoints = img.getDepth()
timeDimension = 2;

transformed_stack = img.factory().create(img);
transformed_cursor = transformed_stack.cursor();

for (i in (0..<numTimepoints)) {
	volumeAtTimeFrame = Views.hyperSlice( img, timeDimension, i );
	result = Views.interval(
				RealViews.affine(
					Views.interpolate( Views.extendZero( volumeAtTimeFrame ), interp ),
					transform ),
				volumeAtTimeFrame );
	result_cursor = result.cursor();
	while (result_cursor.hasNext()) {
		transformed_cursor.fwd();
		transformed_cursor.get().set( result_cursor.next() );
	}
}

ui.show( transformed_stack )
This works reliably and definitely allows me to achieve what I’m looking for. That said, the cursor-based method for copying the transformed original into my resulting stack seems pretty slow (on the order of 2^20 * numTimepoints pixel visits). Is there a vectorized way to accomplish this, where I could essentially copy a whole frame at a time during my transform?

Thanks again! You’ve been so immensely helpful!!


Nice! Happy to see you got it working!

Are you more interested in writing the output to disk? Or looking at / analysing the result?

If the former, then I’d say parallelization (multithreading) is the way to go. If the latter, I’d suggest not even doing the copying, i.e.:

timePt_list = []
for (i in (0..<numTimepoints)) {
   result = ... // same as you have
   timePt_list.add( result );
}
ui.show( Views.stack( timePt_list ) )

In this case, the transformation is only applied to the timepoint currently being viewed (on-the-fly).
Whether this is practical depends on the details of your images (they sound big).

I regularly apply transformations in parallel, so if you want to go for that, I’ll track down and post the code snippet.
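This isn't the imglib2 code referred to above, but the parallelization idea can be sketched in plain Python: since each timepoint is independent, a pool can map a per-frame transform over the stack (here `transform_frame` is a hypothetical placeholder for the real interpolate/transform/copy step).

```python
from concurrent.futures import ThreadPoolExecutor

def transform_frame(frame):
    # Placeholder for the real per-timepoint work (in imglib2, the
    # interpolate / affine / copy step for one hyperslice).
    return [pixel + 1 for pixel in frame]

def transform_stack(stack, workers=4):
    # Each frame is independent; executor.map runs them concurrently
    # and preserves the original frame order in the result.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform_frame, stack))

stack = [[0, 1], [2, 3], [4, 5]]  # 3 tiny "frames" of 2 "pixels" each
result = transform_stack(stack)
```

In the Groovy/imglib2 setting the equivalent is submitting one task per timepoint to a Java `ExecutorService`; the structure is the same.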



Thanks @bogovicj! I’d like to view and then eventually save the transformed images so any parallelization examples you can share would be much appreciated. The image stacks I’ll be handling with this code are 1024x1024 and have upwards of 3000 frames that need to be viewed rapidly, so on-the-fly transformations don’t seem feasible in this context unfortunately.

I’m a bit shocked that there’s no whole-frame (instead of pixel-by-pixel) method for copying images. It’s probably just that I’m used to MATLAB- and numpy-based image manipulation, but I’d assumed I should be able to copy the entire array of transformed pixel data into the fresh stack. I’m thinking something like (excuse the pseudo-code):

# start with img, a 1024x1024x15 matrix
new_img = array(img.dimensions)
for i in 0..img.dimensions(2):
    transformed_image = some_func(img(:,:,i))
    new_img(:,:,i) = transformed_image

I’m guessing this doesn’t work because of how imglib2 addresses images (i.e. as RandomAccessibles instead of matrices)?
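The pseudo-code above, made concrete in runnable Python for comparison (nested lists stand in for the image arrays, and `some_func` is a hypothetical per-frame transform):

```python
def some_func(frame):
    # Hypothetical placeholder transform: double every pixel value.
    return [[v * 2 for v in row] for row in frame]

# A tiny "stack": 3 frames, each a 2x2 image.
img = [[[1, 2], [3, 4]] for _ in range(3)]

new_img = [None] * len(img)
for i in range(len(img)):
    # Assign the whole transformed frame at once -- no per-pixel loop
    # visible to the caller, which is the MATLAB/numpy-style intent.
    new_img[i] = some_func(img[i])
```

The per-pixel work still happens inside the frame-level operation, of course; the difference is whether the library hides that loop for you.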


Hi @weinberz,

The big reason that the copy operation is not easy to do in imglib2 is that the philosophy is to avoid copying whenever possible, because:

  1. Copying is dangerous if your images don’t (or barely) fit in memory.
  2. It incurs an overhead cost.

Copying shouldn’t cost (much) more compute time in imglib2 than in Matlab or python, but it does cost more lines of code (using a cursor).

See this thread for a more concise way of copying stuff (and looping in general) in imglib2



Edit: even given the above, there are times when copying makes sense, and yours is one of them.

Thanks again for all your help @bogovicj and @hwada. For anyone stumbling on this thread in the future, I was working on turning @bogovicj’s script into a slightly more robust plugin, but in the process discovered the NanoJ-Core plugin, which has multi-channel registration built in and is trivially easy to use.

Hope that helps someone!
