Multiview reconstruction suddenly lacks Z (failed to reproduce fusion)

Recently (9/24/2019) I successfully fused one dataset of 5 angles (yay!). It was great. However, when I tried it again some time later (10/31/2019), I suddenly failed to reproduce the fusion and I seem to have lost Z… The same happens with all my other datasets.
I attached BigDataViewer screenshots from the former and latter .xml files. I tried to follow everything as before and also tried other settings, but I keep failing to get it right.
Was there any update that might cause problems with Z after fusion when using .czi files without conversion to HDF5?



Maybe it is just a problem with reading out or specifying the correct Z size.
If the calibration is incorrect, the registration will fail.

You can try to change the calibration with:
Multiview Reconstruction > Batch Processing > Tools > Specify Calibration
I think when right-clicking on the dataset in the ViewSetup Explorer there should also be such an option.

Such metadata is written to the .xml file.
For further troubleshooting it would be great if you could post this .xml.
It also helps to post an issue on GitHub:
the developers usually communicate via that platform and are quite responsive there.
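For reference, the calibration sits in the `voxelSize` element of each `ViewSetup` in a BigDataViewer-style .xml. A made-up example (dimensions, unit, and voxel sizes are placeholders, not values from your file):

```xml
<ViewSetup>
  <id>0</id>
  <size>1920 1920 120</size>        <!-- image dimensions in pixels -->
  <voxelSize>
    <unit>µm</unit>
    <size>0.28 0.28 1.5</size>      <!-- x y z calibration -->
  </voxelSize>
</ViewSetup>
```

If the Z entry of `voxelSize` does not match your acquisition's z-step, that would be the place to look.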

Many thanks! I looked around but I could not find the calibration problem in the xml… (attached; dataset 2 does not work). What is weird is that I used the same file to start with, so the metadata in the input was also the same.

dataset 2.xml (14.1 KB) dataset.xml (15.6 KB)

I agree the .xml seems fine. I guess this could then be a bug.

@tpietzsch @hoerldavid could help you address that.


Thank you! I also wrote on GitHub… should I write @tpietzsch @hoerldavid a private message?

You could try @hoerldavid first, maybe. But I don’t know how much more effective a private message would be compared to including them with the @ mention =)

I had a look at the xml files. They both have a long-ish sequence of transformations for each view.
dataset.xml has

  1. AffineModel3D regularized with an AffineModel3D, lambda = 0.1
  2. AffineModel3D regularized with an AffineModel3D, lambda = 0.1
  3. AffineModel3D regularized with an AffineModel3D, lambda = 0.1
  4. Rotation around y-axis by 301.0 degrees
  5. Center view
  6. Translation
  7. calibration

dataset 2.xml has

  1. AffineModel3D regularized with an AffineModel3D, lambda = 0.1
  2. AffineModel3D regularized with an AffineModel3D, lambda = 0.1
  3. Rotation around y-axis by 301.0 degrees
  4. Center view
  5. Translation
  6. calibration

Transformations up to the “AffineModel3D regularized with an AffineModel3D, lambda = 0.1” are identical for both xmls.

There is one more “AffineModel3D regularized with an AffineModel3D, lambda = 0.1” transformation in dataset 2.xml, and these transformations look wrong: for example, the first view has scaling of
X=0.47, Y=0.30, and Z=0.01, which would explain the squashing…

So to me this looks like BDV is correctly displaying what is in the file.
Are you sure you did the same steps to produce the registrations in both cases?
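As a side note, the per-axis scale factors of an affine can be read off as the column norms of its 3×3 linear part (exact when there is no shear). A quick NumPy sketch with a hypothetical matrix built from the scaling values above (not the actual matrix from the .xml):

```python
import numpy as np

# Hypothetical 3x4 affine (linear part plus translation column),
# a pure scaling similar to the values reported for dataset 2.xml.
A = np.array([
    [0.47, 0.0,  0.0,  10.0],
    [0.0,  0.30, 0.0,  -5.0],
    [0.0,  0.0,  0.01,  2.0],
])

# Per-axis scale factor = Euclidean norm of the corresponding column
# of the 3x3 linear part.
scales = np.linalg.norm(A[:, :3], axis=0)
print(scales)  # -> [0.47 0.3  0.01]: Z is squashed to ~1% of its size
```

A Z scale of 0.01 flattens the volume almost to a plane, which matches the "lost Z" appearance in BDV.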


I also looked at the files and agree with @tpietzsch, they seem fine, so something probably went wrong during the registration.

What I found weird is that in dataset.xml, the “AffineModel3D regularized with an AffineModel3D, lambda = 0.1” registration for view0 is always the identity. In dataset 2.xml it is not, which could mean that you did the registration without fixing any image, and the optimization then converged to squashing everything.

Anyway, to help more, could you tell us what parameters you used in Register using Interest Points?


Thanks for looking into it! I was getting totally desperate :smiley: trying the different combinations of parameters over and over again…

@hoerldavid awesome! Fixing the views solved my problem! This one somehow slipped through. @tpietzsch so there must have been a difference after all.

:partying_face::partying_face::partying_face: thanks a lot!


Great to hear that this solved the problem :smiley:

Here is the explanation of the problem (also in case someone else with a similar problem stumbles upon this thread):
If you use an affine transformation model, the registration is allowed to arbitrarily shift, rotate, scale, and shear the images to move corresponding interest points as close together as possible. Without any constraint, the best fit under this model is the degenerate one: shrink every image to a single point, so that all distances between corresponding points become zero. By fixing at least one image, you tell the registration to leave that image as it is, which removes the possibility of collapsing all images.

So long story short: If you use an Affine registration model, be sure to fix at least one image/view.
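The collapse can be demonstrated with a toy least-squares sketch (NumPy, made-up point sets; this illustrates the failure mode, not the actual Multiview Reconstruction optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical corresponding interest points in two views:
# view B is simply view A shifted by 5 units along X.
pts_a = rng.uniform(0, 100, size=(20, 3))
pts_b = pts_a + np.array([5.0, 0.0, 0.0])

def cost(Ma, Mb):
    """Sum of squared distances between transformed correspondences.
    Each model is a (3x3 linear part, translation vector) pair."""
    ta = pts_a @ Ma[0].T + Ma[1]
    tb = pts_b @ Mb[0].T + Mb[1]
    return np.sum((ta - tb) ** 2)

# With no view fixed, the degenerate solution "map everything to the
# origin" is reachable and achieves zero cost -- a perfect, useless fit:
collapse = (np.zeros((3, 3)), np.zeros(3))
print(cost(collapse, collapse))  # -> 0.0

# Fixing view A (leaving it at the identity) removes that option; the
# best affine for view B is then the sensible one: undo the +5 shift.
X = np.hstack([pts_b, np.ones((20, 1))])       # design matrix [x y z 1]
M, *_ = np.linalg.lstsq(X, pts_a, rcond=None)  # solve X @ M ≈ pts_a
print(M[3])  # translation row, approximately [-5, 0, 0]
```

With one view fixed, the optimizer can no longer shrink everything, so it has to find the real relative transformation instead.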