Big Data Viewer xml - affine transform

Dear all,

I would like to understand better the lines corresponding to the View registration in the .xml files from the Big Data Viewer, as I need them for the mamut2r package I am currently developing, whose aim is to provide tools to import MaMuT .xml files into R (see for more info).

I am currently working on the dataset provided in the “Get started” page of the MaMuT Fiji plugin (dataset here: and MaMuT manual here).
In the ViewRegistration nodes of MaMuT_Parhyale_demo.xml, I see two affine transforms per timepoint and setup, stored as vectors (one named calibration and the second one Fast 3d geometric hashing…).

(1) In the affine transform vectors, which indices correspond to the translation, the scales, the skews and the angles? I need to know this in order to build the 4x4 matrix from its constituents using the buildAffine() function from the RNiftyReg package.

(2) I guess that I need to compute the composition of the affine transforms when there are two or more of them. If I understood correctly what I read online, the order of the composition matters. Here the order would be calibration * Fast 3d geometric hashing, is that right?

(3) Can I assume that the structure of other .xml files coming from the BigDataViewer will be similar, e.g. one or several affine transforms stored for each timepoint and setup in the ViewRegistration nodes?

Many thanks in advance


Hi @MarionLouveaux,

Sorry we were slow to respond to your post.

For MaMuT specific stuff, @tinevez or @tpietzsch will be the most help, but I can get you started.

  1. The affine transform vectors store the “top” three rows of the 4x4 affine matrix, in row-major order.

  2. Yes, the order of composition will definitely matter. I would expect calibration to be applied first (and the right-most matrix is applied first), so I bet it’s (3D-HASHING) * (CALIBRATION), but I’m not sure.

  3. It should be similar, but it likely will not be the same. Edit: see Tobias’ post below.
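
In other words, the twelve numbers are the first three rows of the homogeneous 4x4 matrix, read left to right, top to bottom. Here is a minimal numpy sketch of how to rebuild the full matrix (the function name `affine_from_bdv` and the example values are mine, for illustration only, not part of any BDV API):

```python
import numpy as np

def affine_from_bdv(values):
    """Build a 4x4 homogeneous matrix from the 12 numbers stored in a
    BDV <affine> element (top three rows, row-major order)."""
    top = np.asarray(values, dtype=float).reshape(3, 4)
    # The bottom row [0, 0, 0, 1] is implicit in the file
    return np.vstack([top, [0.0, 0.0, 0.0, 1.0]])

# Example: an anisotropic calibration that scales z by 5
calibration = affine_from_bdv(
    [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 5, 0])
```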

Good luck,


Yes, that is correct.
The transformation maps local (= pixel) coordinates to world coordinates:
pos_world = (3D-HASHING) * (CALIBRATION) * pos_local
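
Numerically, that order of composition looks like this in numpy (the matrix values below are made-up placeholders, not taken from the demo dataset):

```python
import numpy as np

# Hypothetical example transforms; BDV files store only the top three
# rows of each matrix, the bottom row [0, 0, 0, 1] is implicit.
calibration = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, 0, 5, 0],
                        [0, 0, 0, 1.0]])   # anisotropic z-scaling
hashing = np.array([[1, 0, 0, 10],
                    [0, 1, 0, 20],
                    [0, 0, 1, 30],
                    [0, 0, 0, 1.0]])       # a pure translation

pos_local = np.array([2, 3, 4, 1.0])       # homogeneous pixel coordinate
# Right-most matrix is applied first: calibration, then hashing
pos_world = hashing @ calibration @ pos_local
```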

Yes, it is safe to assume that. Registration must be present for each timepoint / setup combination. And it is always one or several affine transforms.
(In theory, it could be other kinds of transforms, defined by giving a Java class name in the xml file, but this was never used in practice. And even that possibility should probably be removed.)
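
Putting the two answers together, reading and composing the registrations could look roughly like this. The element names (`ViewRegistration`, `ViewTransform`, `affine`) follow what is described in this thread; the toy XML below is my own mock-up, so treat the whole thing as a sketch rather than a parser for real files:

```python
import xml.etree.ElementTree as ET
import numpy as np

# Mock document mimicking the layout described above; real BDV/MaMuT
# files have more surrounding elements.
XML = """<SpimData>
  <ViewRegistrations>
    <ViewRegistration timepoint="0" setup="0">
      <ViewTransform type="affine">
        <Name>Fast 3d geometric hashing</Name>
        <affine>1 0 0 10 0 1 0 20 0 0 1 30</affine>
      </ViewTransform>
      <ViewTransform type="affine">
        <Name>calibration</Name>
        <affine>1 0 0 0 0 1 0 0 0 0 5 0</affine>
      </ViewTransform>
    </ViewRegistration>
  </ViewRegistrations>
</SpimData>"""

def to_4x4(text):
    # 12 numbers = top three rows, row-major; append the implicit bottom row
    top = np.array([float(v) for v in text.split()]).reshape(3, 4)
    return np.vstack([top, [0.0, 0.0, 0.0, 1.0]])

def load_registrations(xml_string):
    """Map (timepoint, setup) -> composed 4x4 pixel-to-world matrix.
    Transforms are multiplied in file order, so the first one listed
    is the left-most factor (i.e. applied last)."""
    registrations = {}
    for reg in ET.fromstring(xml_string).iter("ViewRegistration"):
        key = (int(reg.get("timepoint")), int(reg.get("setup")))
        total = np.eye(4)
        for affine in reg.iter("affine"):
            total = total @ to_4x4(affine.text)
        registrations[key] = total
    return registrations

regs = load_registrations(XML)
```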


Thanks a lot @bogovicj and @tpietzsch!!! Your answers are very helpful :grinning:

Just for reference if someone bumps into this topic (like me) and needs some more help figuring this out:
I found this Math Stack Exchange topic very helpful.

I have also implemented some convenience functions in python, see pybdv.transformations.
For now, I have only implemented scaling, but extending this to translations or rotations should be easy.
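
For instance, a pure translation in the flat 12-number layout would be (a standalone sketch of the format, not the actual pybdv API):

```python
def translation_to_bdv(tx, ty, tz):
    # Top three rows of a 4x4 translation matrix, row-major,
    # as stored in a BDV <affine> element.
    return [1.0, 0.0, 0.0, float(tx),
            0.0, 1.0, 0.0, float(ty),
            0.0, 0.0, 1.0, float(tz)]
```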


Reviving this old thread, but I’d like to know whether anyone would be interested in supporting transforms other than Affine3D. Supporting ThinPlateSpline, BSpline, or even a generic serializable RealTransform would be really nice. But where it makes the most sense is for BigWarp. This would mean refactoring AbstractSpimSource and probably a bunch of other classes, but I don’t know how deep we would have to go, or how interested people are (I, @Christian_Tischer, and @schorb are!)

@Christian_Tischer @tpietzsch @bogovicj