Incorrect calculation of Affine Transformation in Descriptor-based registration plugin

I’m facing an issue when measuring chromatic aberration between two channels using @StephanPreibisch’s Descriptor-based registration (2d/3d) plugin.

When comparing two images of fluorescent beads acquired at different wavelengths, choosing an Affine (3d) model leads to erroneous results. These are the two stacks saved as ZIP from within ImageJ:

Channel1.zip (7.1 MB)

Channel2.zip (8.9 MB)

Using a Rigid (3d) model gives reasonable results:

3d-rigid: (0.9999995370077311,  -9.619443093375998E-4, -2.5445359810519136E-5, -0.18273322111835735,
           9.619420066700193E-4, 0.9999995332518842,   -9.03525678586964E-5,    0.9090712305010016,
           2.553226207243114E-5, 9.032804906567956E-5,  0.9999999955944722,     0.3712261063783586)

… whereas using Affine (3d) shifts the first image off, so that the beads no longer align at all:

3d-affine: (1.0021692039342023,   -7.988350974716951E-4,  0.5824109663517447,  -7.121635430846575,
            0.0018053103497930323, 1.000650764791395,     1.2441180286114104, -13.215184111686032,
           -2.736393668926716E-4,  9.479495003993435E-4, -0.12079485139830126, 12.53278056716039)

The following macro reproduces the issue:

open("http://forum.image.sc/uploads/imagej/original/1X/5fff97e62514000a26b0755d8f0671785cf77893.zip");
open("http://forum.image.sc/uploads/imagej/original/1X/ae956102724b30b7492d71ef7dcff5e0c0959e65.zip");
run("Descriptor-based registration (2d/3d)", "first_image=Channel1.tif second_image=Channel2.tif brightness_of=[Interactive ...] approximate_size=[Interactive ...] type_of_detections=[Interactive ...] subpixel_localization=[3-dimensional quadratic fit] transformation_model=[Affine (3d)] images_pre-alignemnt=[Approxmiately aligned] number_of_neighbors=3 redundancy=1 significance=3 allowed_error_for_ransac=5 choose_registration_channel_for_image_1=1 choose_registration_channel_for_image_2=1 create_overlayed");

This image (Channel1: green, Channel2: blue) illustrates how the beads are misaligned after running the plugin:

Digging through the source code for a potential bug, I eventually ended up at process.Matching#pairwiseMatching, but I’m stuck on how best to test this.

@StephanPreibisch, @axtimwalde any comments?

Thanks in advance for your help.

After some more research, I found some suspicious calculations of matrix determinants in mpicbg.models.AffineModel3D (which is not very DRY, by the way):

  • in fit(double[][] p, double[][] q, double[] w)
  • in fit(float[][] p, float[][] q, float[] w)
  • in fit(Collection<P> matches)

If I understood correctly, the correct calculation would be:

final double det =
    a00 * a11 * a22 +
    a01 * a12 * a20 +
    a02 * a10 * a21 -
    a02 * a11 * a20 -
    a12 * a21 * a00 -
    a22 * a10 * a01;

Could someone please confirm that this might be the cause? I didn’t have the time to come up with a proper test for it, but I’m willing to submit a pull request if that fix is correct.

Oh, I take that back: I overlooked that the matrix is (supposed to be) symmetric at this point, so the calculations are correct as they are.
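For the record, the equivalence is easy to verify numerically; a minimal Groovy sketch with arbitrary values (not code from mpicbg):

// for a symmetric 3x3 matrix (a10 = a01, a20 = a02, a21 = a12), the
// general rule-of-Sarrus expansion equals the upper-triangle-only one
def (a00, a01, a02) = [2.0, 0.5, -1.0]
def (a11, a12, a22) = [3.0, 0.25, 4.0]
def (a10, a20, a21) = [a01, a02, a12]

def detGeneral = a00*a11*a22 + a01*a12*a20 + a02*a10*a21 -
                 a02*a11*a20 - a12*a21*a00 - a22*a10*a01
def detSymmetric = a00*a11*a22 + 2*a01*a12*a02 -
                   a02*a02*a11 - a12*a12*a00 - a01*a01*a22

assert Math.abs(detGeneral - detSymmetric) < 1e-12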

I’d be happy to get any advice where to look for the cause of my original problem, though.

Hi Jan,

You could have a look at the similar issue raised by Olivier Burri on the ImageJ mailing list (nabble.com). I hope it gives you some pointers.

http://imagej.1557.x6.nabble.com/Descriptor-based-registration-3D-on-beads-gives-poor-results-td5000391.html

Best,
chin

@chin Thank you for pointing me to this post, I had missed it so far.

The gist of that discussion is that anisotropic sampling in z can degrade the registration. I had already considered that this might cause trouble in my case as well, but there seem to be no problems with peak detection:

  • the plugin finds the same number of candidates in both images
  • sub-pixel localization seems to be fine (as far as I can tell)
  • it does work well when using the Rigid model, but fails only with Affine

So I assume there must still be a problem specific to AffineModel3D.

Yes, while the Nature Methods paper (and its supplementary material) is a useful resource, a little more explanatory javadoc would already be very helpful :wink:

I looked a bit more into the issue, and extracted the point coordinates of the beads using Matching.extractCandidates():

x          y          z
15.43407   118.4631   10.95225
23.95161   189.8237   10.95459
40.96067   399.3395   11.08748
43.76619   150.3128   11.01
67.97006   464.5647   11.14603
89.45972   420.8268   11.10822
105.2823   252.0132   11.04555
.          .          .
.          .          .

The exported subpixel coordinates all look fine.

Using these coordinates, I ran the following Groovy script to fit an AffineModel3D:

import mpicbg.models.AffineModel3D;

// bead coordinates detected in Channel1, one [x, y, z] row per bead
a = [[15.434072,    118.46311,    10.952252],
[23.95161,    189.82367,    10.95459],
[40.960674,    399.33954,    11.08748],
[43.76619,    150.31279,    11.009996],
[67.97006,    464.56473,    11.146028],
[89.45972,    420.82678,    11.108215],
[105.28226,    252.01321,    11.045548],
[109.44038,    427.18225,    11.092786],
[113.30713,    229.50784,    10.992185],
[128.27286,    459.59274,    11.145987],
[132.80798,    405.80847,    11.114774],
[144.65749,    298.88016,    10.997809],
[150.35008,    40.110073,    10.84678],
[165.03986,    342.00388,    11.039748],
[194.90036,    384.04898,    11.049682],
[289.29044,    462.5165,    11.085374],
[295.09573,    452.19348,    11.074145],
[325.4797,    413.4804,    11.028268],
[358.28165,    466.71954,    11.04538],
[381.8618,    100.59195,    10.840279],
[388.5433,    490.98703,    11.147079],
[398.84198,    323.97894,    10.9963],
[419.34824,    98.5917,    10.797312],
[437.37817,    293.4611,    10.886358],
[454.27036,    488.56296,    11.074659],
[492.7722,    89.76044,    10.838887]];

//println(a);

// corresponding bead coordinates detected in Channel2
b = [[14.861066,    119.465096,    11.229203],
[23.153625,    190.55356,    11.325839],
[39.960358,    400.47815,    11.571323],
[42.946514,    151.00925,    11.269929],
[67.094986,    465.64297,    11.6274395],
[88.742325,    421.832,    11.574621],
[104.649506,    253.10426,    11.303434],
[108.3977,    428.46942,    11.539685],
[112.60796,    230.27635,    11.358875],
[127.60289,    460.66956,    11.609537],
[132.10135,    406.78476,    11.543575],
[143.98784,    299.7424,    11.344606],
[150.05653,    40.651775,    11.142687],
[164.54166,    343.27408,    11.38195],
[194.5087,    385.09262,    11.461491],
[288.96295,    464.2268,    11.511548],
[294.7023,    453.58936,    11.509766],
[324.9036,    415.1506,    11.439083],
[358.2236,    468.41248,    11.535271],
[381.70197,    101.64196,    11.174496],
[388.29242,    492.62183,    11.5207815],
[398.53986,    325.3781,    11.397941],
[419.23538,    99.69395,    11.1963625],
[437.44797,    294.90387,    11.284115],
[454.7347,    490.30087,    11.491287],
[492.82825,    90.981766,    11.147794]];

//println(b);

// uniform weights, one per point pair
c = [1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1, 1, 1]

model = new AffineModel3D();
// fit() expects [dimension][pointIndex] arrays, hence the transpose
model.fit((double[][]) a.transpose(), (double[][]) b.transpose(), (double[]) c);

println(model);

// map each Channel2 point back into Channel1 space for comparison
for (e in b) {
    println(model.applyInverse((double[]) e));
}

The model after fitting is:

3d-affine: (1.0020359277675022,    -2.787292921979656E-4,  -0.06733132363933692,  -0.08200853197846536,
            0.001344707430158465,   1.0014533293751562,    -0.11278230605239514,   1.6600865363593664,
           -2.2449662858492654E-4,  9.613515037913568E-4,   0.031991622777994166, 10.795421836355652)

Again, the last number (the z translation of 10.79) is suspiciously high, given how similar the two point sets are.

To ensure that it’s actually possible to calculate a reasonable affine transform from those coordinates, we used Matlab to compute the following matrix from the same bead image data:

 0.9976    0.0001    0.0012    1.0053
-0.0018    0.9980   -0.0001   -0.1085
 0.0003    0.0000    0.9994   -0.4754
 0         0         0         1.0000

This looks much more reasonable to me than the model returned by AffineModel3D.

I also looked at the distances of the point pairs before and after transformation (using applyInverse()), and while the mean square distance is reduced, I noticed a high variation and an overall worsening of the point pair distances in z:

Z distance before   Z distance after
-0.28                0.85
-0.37               -0.09
-0.48               -1.44
-0.26                0.39
-0.48               -1.38
-0.47               -1.23
-0.26                2.00
-0.45               -0.10
-0.37               -0.52
-0.46               -1.39
-0.43               -1.01
-0.35                1.79
-0.30                0.14
-0.34                1.82
-0.41                0.39
-0.43                0.57
-0.44                0.26
-0.41                1.06
-0.49               -0.57
-0.33               -0.66
-0.37                0.50
-0.40               -0.90
-0.40               -1.71
-0.40                1.36
-0.42                0.81
-0.31               -0.93
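Something like the following continuation of the Groovy script prints these distances (a sketch, assuming a, b and model are still in scope; sign convention is Channel1 minus Channel2):

println("Z distance before\tZ distance after")
[a, b].transpose().each { p, q ->
    // map the Channel2 point back into Channel1 space
    def m = model.applyInverse((double[]) q)
    printf("%.2f\t%.2f%n", p[2] - q[2], p[2] - m[2])
}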

Do you agree that there’s something wrong with AffineModel3D here?
I would appreciate help to debug this.

EDIT: I filed an issue on GitHub.

I noticed that the slice numbers and file sizes differ when using the Rigid (3d) or Affine (3d) models. Taking your images as an example, the fused image has 21 slices and a file size of 21 MB, which looks reasonable. The image properties of both stacks are unit=pixel; x=1, y=1, z=1.

When the image properties are set to x=0.02 µm, y=0.02 µm, z=0.2 µm, the fused image has 228 slices and a file size of 230 MB. In another setting, with x=0.2 µm, y=0.2 µm, z=0.2 µm, the fused image again has 21 slices and 21 MB. But in real acquisition settings I more often have a smaller pixel size in x and y and a larger voxel depth. I have no idea what causes the change in slice number and file size.

Yes, this is related, but it actually seems to be a different bug: there’s a zStretching parameter that is calculated from the input image calibration.

This parameter is used when finding peak correspondences, and the image calibration is used when applying the model to create a fused image. Not sure where the bug is, though. Further investigation needed.
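If I read the code correctly, the stretching factor is derived from the calibration roughly like this (a Groovy sketch of the assumed relation, not verbatim plugin code):

import ij.IJ

// with x = y = 0.02 µm and z = 0.2 µm the detections get stretched
// 10x in z, while an isotropic calibration leaves them untouched
cal = IJ.getImage().getCalibration()
zStretching = cal.pixelDepth / cal.pixelWidth
println("zStretching = " + zStretching)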

@imagejan @chin @StephanPreibisch Sorry for the late reply. At first glance this looks like an ill-conditioned problem: all z-coordinates are more or less in the same plane, ± noise. I assume that Matlab has some mechanism to treat such a situation differently. AffineModel3D, at this time, implements only the most direct way to invert the matrix. Before digging into this: do you think that your data carries any information about z? If that is not the case, then you should use an AffineModel2D. If this turns out to be the core of the problem (and not some obvious bug), then either the surrounding plugin should do this recognition, or we would have to implement an AffineModel3D that recognizes such an ill-defined situation and makes an educated decision about which solver to use. I do not, however, want this in the base implementation, because smartness always costs runtime, and we use Model.fit() in iterative solvers, i.e. being smart may hurt.

Hi, I agree, I just had a look at the data and it looks pretty co-planar. To use an AffineModel2D: (max-)project both images, compute an AffineModel2D, then run the plugin again on the 3D images and check “Re-apply models” in the very first dialog box. This will apply the 2D affine model plane by plane.
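For the projection step, a standard command is all you need (a Groovy sketch; the registration re-run itself is easiest to configure interactively):

import ij.IJ

// maximum-intensity projection of the active stack;
// repeat for both channels before computing the 2D model
IJ.run(IJ.getImage(), "Z Project...", "projection=[Max Intensity]")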

@axtimwalde @StephanPreibisch thanks for your answers!

I assumed that the limited spread in z was the cause of the problem, but was confused that the model after fitting would actually make things worse than before (which is not what a novice user of the Descriptor-based registration plugin would expect).
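Indeed, the degeneracy is easy to quantify: checking the spread of the detected coordinates along each axis (a quick continuation of the earlier Groovy script, reusing the list a) shows that x and y span hundreds of pixels while z varies by only about a third of a pixel:

// print the coordinate range of the Channel1 detections per axis
(0..2).each { d ->
    def vals = a*.getAt(d)
    printf("axis %d: range = %.3f%n", d, vals.max() - vals.min())
}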

After reading the publications by Schaefer et al. and Zhu and Gortler, I have a slightly better grasp on the issue.

We can’t ignore z unfortunately, because we observe a translation in z that needs to be corrected. But what seems to work well for us is using a 3d similarity model (thus allowing for translation, rotation and scaling, but taking away the shear-related degrees of freedom), which John Bogovic added to mpicbg in August and I now made available from Descriptor-based registration (thanks @StephanPreibisch for merging my changes!).
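Trying this on the coordinates above only requires swapping the model class in the earlier Groovy script (a sketch; it assumes the same array-based fit() overload applies as for AffineModel3D, and that mpicbg ≥ 1.1.1 is on the classpath):

import mpicbg.models.SimilarityModel3D

// translation, rotation and isotropic scaling; no shear terms
sim = new SimilarityModel3D()
sim.fit((double[][]) a.transpose(), (double[][]) b.transpose(), (double[]) c)
println(sim)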

Hi Jan,
I followed the link to GitHub behind “made available from Descriptor-based registration”. It says: “Add option to use SimilarityModel3D; This required updating the mpicbg dependency to version 1.1.1.” Could you tell me how to update the mpicbg dependency to the new version? Thanks!
chin

@chin Until these versions are available via the updater, you’ll have to clone the repository and build it with Maven, e.g. on the command line:

git clone https://github.com/bigdataviewer/Descriptor_based_registration.git
cd Descriptor_based_registration
mvn -Dimagej.app.directory=/path/to/your/ImageJ.app -Ddelete.other.versions=true install

This will automatically pull in the mpicbg dependency for you and copy all required files into your ImageJ installation path.

Hi Jan,
Thanks for your explanation. I hope it will be released via the updater eventually; in the meantime I will try the approach you described and will let you know if I run into any problems. Best regards,
chin

Hi, you could simply use an AffineModel2D first, as described above, and then additionally run a TranslationModel3D…
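For illustration with the raw bead lists from the earlier script (in practice you would fit this after the 2D alignment), the translation fit looks like this (a sketch reusing a, b and c):

import mpicbg.models.TranslationModel3D

// translation-only fit: the mean offset between the two point sets
t = new TranslationModel3D()
t.fit((double[][]) a.transpose(), (double[][]) b.transpose(), (double[]) c)
println(t)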