Reproducibility of BigStitcher Reconstruction

Hello imaging people,

I’ve been working for over a year on the same topic of imaging and analyzing a specific form of biological granules, which I’ve been attempting to reconstruct using BigStitcher for multiple views (not tiled). While I have succeeded in doing so on several occasions (and failed on many others), my big concern is the reproducibility of the process.

My understanding is that the detection of interest points (IP) is reproducible (something I’ve confirmed with my control samples), but the registration is not, and that the randomness is inherent to the RANSAC algorithm. With this, my question is two-fold:

  1. Am I right in blaming the (albeit slight) differences in IP registration results (i.e. error, number of corresponding IPs, …) on RANSAC?

  2. How can I remedy this so that my protocol for reconstruction is reproducible? And if not, how can I justify the differences in reconstructions of the same control samples using the same parameters?

Thanks in advance!

As far as I understand, RANSAC is just used to separate false-positive descriptor matches from true-positive ones, the idea being that true-positive matches all point towards the same transformation model.
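To make that concrete, here is a toy sketch of the RANSAC principle (sample, fit, count inliers, keep the best model) for a 1D translation, with entirely made-up correspondences; BigStitcher's actual implementation works on 3D descriptors and richer transformation models, but the separation of true from false matches follows the same logic:

```python
import random

random.seed(0)  # fixed seed, so this sketch is deterministic

# Hypothetical correspondences: (position in view A, position in view B).
# The true shift is +5.0; the last two pairs are false matches (outliers).
pairs = [(x, x + 5.0) for x in [1.0, 2.0, 4.0, 7.0, 9.0]] + [(3.0, 1.0), (6.0, 20.0)]

def ransac_shift(pairs, iters=100, tol=0.5):
    best_inliers = []
    for _ in range(iters):
        a, b = random.choice(pairs)     # minimal sample: one pair fixes a shift
        shift = b - a
        # all pairs consistent with this candidate shift are inliers
        inliers = [(p, q) for p, q in pairs if abs((q - p) - shift) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit on all inliers: mean shift of the consensus set
    return sum(q - p for p, q in best_inliers) / len(best_inliers), best_inliers

shift, inliers = ransac_shift(pairs)  # recovers shift 5.0 from the 5 true pairs
```

Note that the random sampling is exactly where the non-determinism comes from: without a fixed seed, different runs may end up with slightly different consensus sets when matches are borderline.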

Maybe the variation you see comes more from the global optimization, i.e. finding the global transformation of all views against each other, which is also iterative and not necessarily deterministic, especially since the image information is degraded differently between different views.
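As a toy illustration of what such an iterative global optimization does, here is a sketch reduced to 1D shifts with invented pairwise link values (the real optimizer handles full transformation models in 3D): each view is repeatedly moved toward the position its pairwise links suggest, until the configuration settles.

```python
# Hypothetical measured relative shifts between views (with some noise):
# link (i, j) -> measured position of view j minus position of view i.
links = {(0, 1): 10.2, (1, 2): 9.8, (0, 2): 20.4}

pos = {0: 0.0, 1: 0.0, 2: 0.0}
fixed_view = 0  # fix one view to remove the global offset ambiguity

for _ in range(200):  # iterative relaxation toward a consistent configuration
    for v in pos:
        if v == fixed_view:
            continue
        suggestions = []
        for (i, j), d in links.items():
            if j == v:
                suggestions.append(pos[i] + d)   # where link (i, v) puts v
            elif i == v:
                suggestions.append(pos[j] - d)   # where link (v, j) puts v
        pos[v] = sum(suggestions) / len(suggestions)
```

Because the links contradict each other slightly (10.2 + 9.8 ≠ 20.4), no configuration satisfies all of them exactly; the iteration converges to a compromise that distributes the residual error across the links, which is exactly the "result with error" situation discussed below.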

The broader problem is that in certain fields like registration (or deconvolution, etc.) there exist computations that cannot be solved 100% accurately within a reasonable computation time. This means you either get a result with some error or no result at all. The question is then: how large is the error, and how large are the structures of interest?

In my case, reconstructing embryos at single-cell resolution, where each cell spans well over 10 voxels, I think an error of 1 voxel is acceptable.

In any case, random variation in data and data analysis is normal. For example, imaging the same object with the same parameters will lead to slightly variable data. What matters is how large the effect of interest is compared to the variability and error in the data acquisition and analysis.

Thanks for the info, that cleared things up. I still need to look into the parameters that will determine whether I accept or reject the results (error, allowed variability, …).

Thanks again


The way I approached it in the past was to do visual quality control after the alignment. The BigStitcher interface is quite useful for that, since it allows you to overlay the views in different colors and see the overlap; just rotate the view accordingly. The embryo shape needed to be preserved and cells needed to overlap. I have to say that the degradation in z was usually very large anyway, larger than any error in alignment could conceivably be, but this depends on the microscope and lenses that one uses.

Another indication is the transformation error the software gives you after the global optimization. Below 1 pixel is great, 1–2 pixels is okayish, beyond that is bad…
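That rule of thumb could be written as a trivial check for scripting a QC step (the thresholds are just the heuristic above, not anything BigStitcher itself prescribes):

```python
def judge_alignment(error_px: float) -> str:
    """Classify a post-optimization transformation error (in pixels)
    using the heuristic thresholds from the discussion above."""
    if error_px < 1.0:
        return "great"
    elif error_px <= 2.0:
        return "okayish"
    return "bad"
```

This kind of explicit acceptance criterion also helps with the reproducibility question: rather than eyeballing each run, you record the error and a pass/fail decision for every reconstruction.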

Also, I used beads for the alignment and found that to be more robust than sample-based alignment.

I should say, however, that I usually just did the multiview registration and did not fuse the data, since I then looked at individual stacks for my further analysis.

Thanks for the tip on the transformation error. As for the beads, we have used them in the past, but I get better alignment results without them.

As for the visual controls in BigStitcher, I agree the interface makes it easier to compare rounds of registration, but do you know whether using it to pre-align manually is recommended, or does it introduce more error and bias?

The more prior information you give the registration, the more effective the optimization will be at finding the global minimum. It is not so much about bias; a bigger problem for the optimization is that it can get stuck in a local minimum. Of course it will not help if the initial transformation is way off, but getting it close is usually enough to nudge it onto the right path, so to speak.

Just think of this process as navigating a mountainous landscape with the aim of arriving at the valley floor. On the way there are many false pits one might fall into; the closer one starts to the valley, the less likely one is to get trapped in them.
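The mountain analogy can be shown with a few lines of gradient descent on a made-up 1D function that has both a local and a global minimum; the function and step size are purely illustrative, not related to the actual optimizer:

```python
def f(x):
    # toy "landscape": global minimum near x ≈ -1.30, local pit near x ≈ 1.13
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from starting point x."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

trapped = descend(2.0)    # starts on the wrong side, falls into the local pit
arrived = descend(-2.0)   # starts near the valley, reaches the global minimum
```

Both runs converge, and both report a small final gradient, yet only one ends up at the true valley floor; that is exactly why a reasonable manual pre-alignment helps rather than biases the registration.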


This has been very helpful, thanks again!