This looks much more reasonable to me than the model returned by AffineModel3D.
I also looked at the distances of the point pairs before and after transformation (using applyInverse()), and while the mean square distance is reduced, I noticed a high variation and an overall worsening of the point pair distances in z:
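To make this concrete, here is a minimal sketch (plain Python, not the plugin's Java) of how an overall reduction in mean square distance can coexist with a worsening of the residuals in z. The residual numbers below are made up purely for demonstration.

```python
# Hedged illustration: overall RMS distance can drop while the z component
# of the residuals gets worse. All numbers below are hypothetical.

def rms(values):
    """Root-mean-square of a list of numbers."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

# residuals (target - mapped source) per point pair, as (dx, dy, dz) tuples
before = [(3.0, 2.5, 0.2), (2.8, 3.1, 0.1), (3.2, 2.7, 0.3)]
after  = [(0.2, 0.3, 1.5), (0.1, 0.2, 1.8), (0.3, 0.1, 1.6)]

for label, res in (("before", before), ("after", after)):
    overall = rms([(dx * dx + dy * dy + dz * dz) ** 0.5 for dx, dy, dz in res])
    rms_z = rms([dz for _, _, dz in res])
    print(label, "overall RMS:", round(overall, 2), " z RMS:", round(rms_z, 2))
```

Looking at the per-axis RMS separately, rather than only the combined distance, is what exposes this behaviour.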
I noticed that the slice number and file size of the fused image differ when using Rigid (3d) versus Affine (3d). Taking your images as an example, the fused image has 21 slices and is 21 MB, which looks reasonable. The image properties of both stacks are unit=pixel; x=1, y=1, z=1.
When the image properties are set to x=0.02 um, y=0.02 um, z=0.2 um, the fused image has 228 slices and is 230 MB. With x=0.2 um, y=0.2 um, z=0.2 um, it is again 21 slices and 21 MB. In real acquisitions, however, I usually have a smaller pixel size in x and y and a larger voxel depth. I have no idea what factors cause the change in slice number and file size.
@imagejan @chin @StephanPreibisch Sorry for the late reply. At first glance this looks like an ill-posed problem: all z-coordinates are more or less in the same plane, ± noise. I assume that Matlab has some mechanism to treat such a situation differently. AffineModel3D, at this time, implements only the most direct way to invert the matrix. Before digging into this: do you think that your data carries any information about z? If not, you should use an AffineModel2D. If this turns out to be the core of the problem (and not some obvious bug), then either the surrounding plugin should do this recognition, or we would have to implement an AffineModel3D that recognizes such an ill-defined situation and makes an educated decision about which solver to use. I do not, however, want this in the base implementation, because smartness always costs runtime and we use the Model.fit() method in iterative solvers, i.e. being smart may hurt.
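To see why co-planar points are a problem for a direct 3D affine fit, here is a small sketch (plain Python, not mpicbg's Java). If all z-coordinates are equal, the covariance matrix of the centered points has a zero row and column, so it is singular and the least-squares normal equations have no unique solution; the example data is made up.

```python
# Hedged sketch: co-planar 3D points make the affine normal equations singular.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def covariance(points):
    """3x3 covariance matrix of a list of (x, y, z) points."""
    n = len(points)
    mean = [sum(p[d] for p in points) / n for d in range(3)]
    return [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / n
             for j in range(3)] for i in range(3)]

coplanar = [(0, 0, 5), (10, 0, 5), (0, 10, 5), (10, 10, 5), (5, 5, 5)]  # all z = 5
spread   = [(0, 0, 0), (10, 0, 5), (0, 10, 2), (10, 10, 8), (5, 5, 4)]

print("co-planar det:", det3(covariance(coplanar)))   # 0 -> singular, ill-posed
print("well-spread det:", det3(covariance(spread)))   # non-zero -> well-posed
```

With real data the determinant is not exactly zero but very small, which is why the fitted model can behave erratically in z.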
Hi, I agree; I just had a look at the data and it looks pretty co-planar. To use an AffineModel2D: (max-)project both images, compute an AffineModel2D, then run the plugin again on the 3D images and check “Re-apply models” in the very first dialog box. This will apply the 2D affine model plane-by-plane.
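Conceptually, applying a 2D model plane-by-plane means running the same 2D transform on every z-slice of the stack. A toy sketch (plain Python with nearest-neighbour inverse mapping; the real plugin uses proper interpolation, and the parameter layout here is my own choice):

```python
# Hedged sketch: apply one 2D affine transform to every z-slice of a stack.

def apply_affine_2d_per_slice(stack, inv):
    """stack: list of 2D slices (lists of rows); inv: INVERSE affine
    (a, b, tx, c, d, ty) mapping output (x, y) back to input coordinates."""
    a, b, tx, c, d, ty = inv
    out = []
    for sl in stack:
        h, w = len(sl), len(sl[0])
        new = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                sx = round(a * x + b * y + tx)  # source x (nearest neighbour)
                sy = round(c * x + d * y + ty)  # source y
                if 0 <= sx < w and 0 <= sy < h:
                    new[y][x] = sl[sy][sx]
        out.append(new)
    return out

# toy 2-slice stack; shift everything by +1 in x (inverse maps x back to x - 1)
stack = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]] * 2
shifted = apply_affine_2d_per_slice(stack, (1, 0, -1, 0, 1, 0))
print(shifted[0])
```

Because the same `inv` is used for every slice, z is left untouched, which is exactly the point of the plane-by-plane approach.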
I assumed that the limited spread in z was the cause of the problem, but was confused that the model after fitting would actually make things worse than before (which is not what a novice user of the Descriptor-based registration plugin would expect).
We can’t ignore z unfortunately, because we observe a translation in z that needs to be corrected. But what seems to work well for us is using a 3d similarity model (thus allowing for translation, rotation and scaling, but taking away the shear-related degrees of freedom), which John Bogovic added to mpicbg in August and I now made available from Descriptor-based registration (thanks @StephanPreibisch for merging my changes!).
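The gain from a similarity model is the reduced number of degrees of freedom (translation, rotation, uniform scale, but no shear). As an illustration of that model class only (in 2D for brevity, not mpicbg's SimilarityModel3D), a similarity transform can even be fitted in closed form by treating 2D points as complex numbers; the data below is made up.

```python
# Hedged illustration: closed-form least-squares fit of a 2D similarity
# transform q = a * p + b, with complex a (uniform scale * rotation)
# and complex b (translation). No shear is representable, by construction.

def fit_similarity_2d(src, dst):
    """Least-squares a, b minimizing sum |dst_i - (a * src_i + b)|^2."""
    n = len(src)
    pm = sum(src) / n
    qm = sum(dst) / n
    num = sum((q - qm) * (p - pm).conjugate() for p, q in zip(src, dst))
    den = sum(abs(p - pm) ** 2 for p in src)
    a = num / den
    b = qm - a * pm
    return a, b

# made-up points, transformed by a known similarity (scale 2, 90 deg rotation,
# translation 3+4j); the fit should recover exactly these parameters
true_a, true_b = 2j, 3 + 4j
src = [0 + 0j, 1 + 0j, 0 + 1j, 1 + 1j]
dst = [true_a * p + true_b for p in src]
a, b = fit_similarity_2d(src, dst)
print(a, b)
```

A full 3D similarity model works analogously but needs a rotation estimated via SVD; the key point is the same: with fewer degrees of freedom, co-planar data no longer leaves the fit underdetermined in the shear directions.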
I followed the GitHub link you mentioned (“made available from Descriptor-based registration”). It says: “Add option to use SimilarityModel3D; This required updating the mpicbg dependency to version 1.1.1.” Could you tell me how to update the mpicbg dependency to the new version? Thanks!
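For what it's worth, if you build against mpicbg yourself in a Maven project, bumping the dependency would look roughly like the snippet below. The groupId/artifactId are my assumption of the usual coordinates, so please verify them against your own pom.xml; as an end user of Fiji you would normally just wait for the updater instead.

```xml
<!-- assumption: mpicbg's usual Maven coordinates; verify in your pom.xml -->
<dependency>
  <groupId>mpicbg</groupId>
  <artifactId>mpicbg</artifactId>
  <version>1.1.1</version>
</dependency>
```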
Thanks for your explanation. I understand it will eventually be released via the updater. In the meantime I will try the approach you described and will let you know if I run into any problems. Best regards,