Does the transformation matrix from Register Virtual Stack work with the TransformJ plugin?

Hi,

I would like to calculate an affine transformation matrix from bead images (four images/channels) and apply it to many images (thousands). I used the Register Virtual Stack plugin and saved the transformation parameters as .xml. These can be applied to the same images or to another set of four images.
The problem is that the Transform Virtual Stack plugin requires the folder of transformation files to contain exactly as many files as there are images in the image folder. I don't want to copy thousands of images into folders in batches of four, so I thought I could use the TransformJ plugin instead.

The problem there is that TransformJ loads the matrix from a .txt file (tab, comma or space separated) and the matrix looks a bit different (e.g. it has a z-dimension). Anyway, I read the documentation of the two plugins and I can convert the parameters from the .xml into a matrix for TransformJ, but for some reason the TransformJ-transformed image is not fully corrected…

Does anyone know whether it should in general be possible to take the parameters derived from Register Virtual Stack (RVS) and apply them within the TransformJ plugin? An affine transformation is supposed to be the same math either way, isn't it?

I realized that the RVS model uses both an AffineModel2D and a TranslationModel2D in the xml. Does anyone know why (since translation is just one part of an affine transformation)? For most of the xml files the TranslationModel2D is just [0,0], but for others it is not. I found with some test images that if I add the TranslationModel2D parameters to the translation parameters of the AffineModel2D, the model makes sense and is consistent for similar images. So I don't think that this is actually the problem, and I also see insufficient correction in cases where the TranslationModel2D is [0,0].

I could also provide images to reproduce the problem I have. Unfortunately I cannot write Java plugins, but I guess one could write a plugin that loads one transformation file and applies it to the current image!?

Any help is very much appreciated! Thanks in advance!

All the best for 2017,

Martin

Hello Martin,

Sorry for the late answer. I just saw your post here.

If the transformations are calculated relative to the previous image, the extra translation is used to bring them all to the same origin.
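For anyone reading along, here is a minimal numpy sketch of that composition, assuming both models are written as homogeneous 3x3 matrices (the numbers are placeholders, not values from a real XML file). Applying the extra translation after the affine simply adds its offsets to the affine's translation column, which matches what Martin observed above:

import numpy as np

# AffineModel2D as a homogeneous 3x3 matrix (placeholder values)
A = np.array([[1.02,  0.01,  5.0],
              [-0.01, 0.98, -3.0],
              [0.0,   0.0,   1.0]])

# Extra TranslationModel2D offsets (placeholder values)
tx, ty = 12.0, -7.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

# Translation applied after the affine: T @ A. The linear part is
# unchanged and (tx, ty) is simply added to the translation column.
combined = T @ A
assert np.allclose(combined[:2, :2], A[:2, :2])
assert np.allclose(combined[:2, 2], A[:2, 2] + [tx, ty])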

Yes, no worries. I’m going to add public methods to do that and I will send you a script so you can use it.


Hi Ignacio,

many thanks for your reply.

Ok, I thought that the affine model already incorporates the translations and that all transformations are calculated relative to the first image. Anyway, I found that combining the translations gives the expected results.

Well, that would be great. Ideally there would be a plugin that loads a registration model and applies it to the current image. The rest I could do in the ImageJ macro language; that's no problem for me.

Thank you very much!

Best,

Martin


OK, I just released a new version of the plugin with methods to read any transform from file and apply it to an image.

Here is an example BeanShell script that calls those methods:

import ij.IJ;
import register_virtual_stack.Transform_Virtual_Stack_MT;

// array to store the world coordinates of the origin of the transformed image
worldOrigin = new int[ 2 ];
// read transform (XML)
transform = Transform_Virtual_Stack_MT.readCoordinateTransform( "/path-to-transforms/image.xml" );
// read image
imp = IJ.openImage( "/path-to-image/input.tif" );
// apply transform
result = Transform_Virtual_Stack_MT.applyCoordinateTransform( imp, transform, 32, true, worldOrigin );
// show result
result.show();

The result image contains the transformed image on a black background, with the bounds adapted to fit the transformed image data.

If you need to place the image in absolute (world) coordinates, you can do so by resizing the canvas and using the origin coordinates provided by the applyCoordinateTransform method.

Let me know if you need more help.

ignacio


Hi,

I need to apply the transform that I'm obtaining from Register Virtual Stack to a series of x,y points (these points are aligned to my original set of images and I need to realign them to the registered stack). To do this I wrote a Python script that reads the transformation matrices from the xml files and applies those affine transformations to the points, but I'm not getting the expected results.
To troubleshoot the problem I took a step back: I'm using my Python script (with the affine transformation routines in skimage) to transform the first slice of the unregistered stack, to see whether I can reproduce the result I get from the plugin. But this doesn't work either.
I seem to understand from this conversation that the Register Virtual Stack plugin sometimes applies a translation to the image before the affine transformation, but in the generated xml files I see no data related to any translation models. Another suggestion was that the plugin might translate the image with respect to the previous image in the stack, but in this case I'm trying to transform the first image of the stack.

Any suggestions?
Many thanks in advance!

Adrian.

Hello @Adrian_Jacobo and welcome to the ImageJ forum!

Can you post here an example of the image, XML file and your python code so we can replicate what you are trying to do?

Thanks!

Hi,

I've managed to fix the script so that it properly reads the transformation matrix, but the results are still not right.
Here I send you an example Python script. You will also find one image and the transformation xml. In the Out directory there is the result of applying the transformation with the Fiji plugin and with the Python script. As you can see, the image transformed by the plugin has some padding which adds an additional translation, but I can't figure out how that is done in order to reproduce it in Python.
I've tried uploading the example but it seems I'm not allowed to, so here is a link to get it. Feel free to post it here for future reference.

Best,
Adrian.

OK, the answer is more or less included in my original post from Feb 18 and it has to do with bringing the image to absolute coordinates. Let me explain it step by step:

  1. Your original image has size 1024 x 768.
  2. After applying the affine transform stored in the XML file (using the plugin) the result image has size 1050 x 819. So there is some padding to avoid leaving any original pixel out of the new image.
  3. This padding makes the new origin (0,0) of the image be different from the original one. The equivalent original (what I called world above) coordinates of the new origin coordinates need to be calculated. In my script, they are provided by the applyCoordinateTransform method as output. If you use it, you’ll get worldOrigin = [47, -133].
  4. Adjusting the canvas using those coordinates and the new image size you get the plugin output (size 1097 x 901).

In your case, the Python script does not pad the output image (it keeps the original image size), so you risk losing some image information after the transformation. On top of that, you need to bring each image to world coordinates so they all share the same reference space.
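To make that concrete, here is a minimal scikit-image sketch of the padding step (this is not your script; the matrix values are rounded examples and the input path is a placeholder):

import numpy as np
from skimage import io, transform

# Forward affine read from the RVS XML (rounded example values)
m = np.array([[0.9,   0.0417,  47.469],
              [-0.04, 1.009,  -90.16],
              [0.0,   0.0,      1.0]])
tform = transform.AffineTransform(matrix=m)

img = io.imread("/path-to-image/input.tif")
h, w = img.shape[:2]

# Transform the four (x, y) corners to get the padded bounding box
corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
warped = tform(corners)
xmin, ymin = np.floor(warped.min(axis=0)).astype(int)
xmax, ymax = np.ceil(warped.max(axis=0)).astype(int)

# Shift the output so (xmin, ymin) becomes the new (0, 0), and size
# it so no original pixel is lost; cval=0 gives the black background
shift = transform.AffineTransform(translation=(-xmin, -ymin))
out = transform.warp(img, (tform + shift).inverse,
                     output_shape=(ymax - ymin + 1, xmax - xmin + 1),
                     cval=0)

# (xmin, ymin) is this image's world origin; keep it to place all
# images in the same reference space afterwards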

I hope this helps!


Thanks for your answer. I understand the basic idea of what you say, but I can't seem to get the details right.
I'm assuming that in step 3 what the applyCoordinateTransform method does is apply the affine transformation to the point (0,0). The transformation matrix in the example is:

m = \left[\begin{matrix} a_0 & a_1 & a_2 \\ b_0 & b_1 & b_2 \\ 0 & 0 & 1 \end{matrix}\right] = \left[\begin{matrix} 0.9 & 0.0417 & 47.469 \\ -0.04 & 1.009 & -90.16 \\ 0 & 0 & 1 \end{matrix}\right]

When I apply this transformation to (0,0) I get (47.469, -90.16). The scikit-image convention for applying the transformation m to a vector (x,y) is:

x' = a_0*x + a_1*y + a_2
y' = b_0*x + b_1*y + b_2

(the coefficients a_n and b_n are the ones in the matrix m defined above)

From this it follows that for (0,0) the transformation always yields (a_2, b_2). The problem seems to be that applyCoordinateTransform is doing something different with (0,0). Can you give me some more details on what it's doing? It's curious that I get the right coordinate for x but not for y.

Thanks,
Adrian.

Not only that: you are also missing the padding part, which is the tricky thing.

With the padding we do, we calculate a new bounding box for the image so that all original pixels are still included inside the new image. Therefore you need to calculate the transformed coordinates of all four corners of the original image:
(0, 0) -> (47.469, -90.16)
(1023, 0) -> (1064.551, -133.822)
(0, 767) -> (79.497, 684.272)
(1023, 767) -> (1096.579, 640.615)

There you have your magic numbers: the minimum coordinates in X and Y are 47.469 and -133.822, which become 47 and -133 when cast to integer.
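In case you want to reproduce those numbers in Python: the matrix shown above is rounded, but a fuller-precision version can be recovered from the four corner mappings (e.g. a_0 = (1064.551 - 47.469) / 1023 ≈ 0.99422), and with it the corner trick gives exactly the plugin's world origin:

import numpy as np

# Matrix entries recovered from the corner mappings above (the
# displayed matrix is rounded), not read from the XML
m = np.array([[0.99422,  0.04176,  47.469],
              [-0.04268, 1.00969, -90.16],
              [0.0,      0.0,       1.0]])

# Homogeneous (x, y, 1) corners of the 1024 x 768 image, as columns
corners = np.array([[0, 1023,    0, 1023],
                    [0,    0,  767,  767],
                    [1,    1,    1,    1]])

xy = (m @ corners)[:2]
xmin, ymin = xy.min(axis=1)
# Casting to int truncates toward zero, as the plugin does
print(int(xmin), int(ymin))  # -> 47 -133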


Ok, this has been very helpful; now I'm able to reproduce the transformation for a single image.
Now, if I transform several images, the plugin calculates a new bounding box and crops them. I understand that this needs to be done because after the transformation each image has a different size. Can you give me some hints on how this is done? How do you keep all the images aligned after cropping them?

Sure! That's the part of the code I sent you in my post yesterday. Basically, you need to store the overall minimum world X and Y coordinates and the maximum width and height.
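A sketch of that bookkeeping in Python (the transforms and sizes below are placeholders; in practice they would come from your XML files):

import numpy as np
from skimage import transform

def transformed_bbox(tform, w, h):
    """(xmin, ymin, xmax, ymax) of a w x h image after tform."""
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
    warped = tform(corners)
    return (*warped.min(axis=0), *warped.max(axis=0))

# One transform and size per image (placeholder values)
tforms = [transform.AffineTransform(rotation=0.02),
          transform.AffineTransform(translation=(10, -5))]
sizes = [(1024, 768), (1024, 768)]

bboxes = [transformed_bbox(t, w, h) for t, (w, h) in zip(tforms, sizes)]
gxmin = min(b[0] for b in bboxes)
gymin = min(b[1] for b in bboxes)
gxmax = max(b[2] for b in bboxes)
gymax = max(b[3] for b in bboxes)

# One common canvas for all images; each padded image is pasted at
# (its own xmin - gxmin, its own ymin - gymin)
canvas_w = int(np.ceil(gxmax - gxmin)) + 1
canvas_h = int(np.ceil(gymax - gymin)) + 1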


Hi @iarganda! Sorry for reopening this thread, but I have a similar problem to Adrian's. I have a couple of images which I register using Register Virtual Stack Slices. Then, using the obtained transformation file, I transform a ROI drawn on one of the images. This ROI, when placed on the other image, was always translated a bit, and after following this thread I realised it might be due to the padding added to the images.

I corrected for it by adding the transformed origin to all the coordinates (basically bringing the origin back to zero). But this seems not to be enough, as the ROI is still offset by a few pixels. And I am not sure I understood your explanation of how to bring all images back to the same size. Could you explain what you are supposed to do after you get the transformed coordinates of all the original image corners? Basically, how do I undo the bounding box added to the images after registration so that the ROI is perfectly aligned in the reference and target images?

Thanks a lot!
Gayathri

Sorry for the late reply! In case you didn't find a solution yet:

That's the problem: you shouldn't correct based on the origin alone but on the overall minimum coordinates. In Adrian's example the origin went to (47.469, -90.16), but one corner went to a lower y value: (1064.551, -133.822). So in the end the x-y correction was 47.469 and -133.822. Does it make sense now?
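For completeness, a small Python sketch of that ROI correction (the ROI vertices are made up; the matrix is the fuller-precision one recovered from the corner mappings earlier in this thread):

import numpy as np

# Forward matrix of the registered image (recovered example values)
m = np.array([[0.99422,  0.04176,  47.469],
              [-0.04268, 1.00969, -90.16],
              [0.0,      0.0,       1.0]])

# ROI vertices as (x, y) on the unregistered image (placeholders)
roi = np.array([[100.0, 200.0],
                [150.0, 200.0],
                [150.0, 260.0]])

# Apply the affine to the points, then subtract the overall minimum
# coordinates (47 and -133 here) so they land on the padded image
ones = np.ones((len(roi), 1))
warped = (np.hstack([roi, ones]) @ m.T)[:, :2]
registered_roi = warped - [47, -133]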

Hi @iarganda
So basically, you transform all four corners and take the minimum x and minimum y from them. I get it now.
I will give this a try, thanks a lot!

Gayathri
