[NEUBIAS Academy@Home] Webinar "Image Big Data II: Registration & Stitching of TB datasets" Questions & Answers

Hi everyone,

On January 19, my colleagues @Sebastien, @bogovicj, @StephanPreibisch and I held the second part of the Image Big Data webinar series of the NEUBIAS Academy@Home, dealing with Big Data Stitching & Registration.

Thanks to the whole NEUBIAS team for organizing the lecture series and especially to @Ofra_Golani, @aklemm, @djpbarry, @Julien_Colombelli, @RoccoDAntuono and @romainGuiet for moderation during the webinar!

You can find the recording of the webinar on the NEUBIAS YouTube channel here.

In this post, we share our answers to all the questions that were asked during the webinar.

If you have further questions, feel free to ask them here :slight_smile:

Part I: BigStitcher

Question 1: Hi, I have a question about data loading using Bio-Formats: when there are >~1000 images (series) in a single file, Bio-Formats becomes extremely slow in loading them (getting a single image out can take many hours). This seems to be a known problem, already for a number of years, and apparently there is no easy fix. Do you experience this issue as well, or do you know a clever way to circumvent it?

Answer by Stephan: Tobias Pietzsch has started to integrate “smarter” rendering strategies, so this will improve over time. We are aware of this limitation and are working on it.

Question 2: Does BigStitcher work with datasets that have already been stitched by vendor software, i.e. can it refine the stitching?

Answer by Stephan: Unfortunately usually not, because as far as I know they all “fuse” the result into one single image that cannot be changed anymore.

Question 3: Is it possible to export subsets of the stitched data as a TIFF stack?

Answer by Stephan: Yes, you can define arbitrary bounding boxes and export them as TIFF/HDF5 or display them as an ImageJ image.

Question 4: Hi David! Really awesome, thank you so much for BigStitcher! We use it for our Zeiss lightsheet and are really happy! I was wondering, do you think it can also be used on data from the new Zeiss Lattice Lightsheet? Will there maybe also be a reader to define a Zeiss Lattice Lightsheet dataset?

Answer by Stephan: Thanks so much! We would love to add it, and it should work! We would just need access to an example dataset. Maybe you can contact us afterwards if you have one? That would be great!

Question 5: What’s the recommended way of splitting .czi files from the lightsheet into individual TIFF files?

Answer by Stephan: The autoloader that David described, which is part of BigStitcher, can do that for you. You can resave the imported CZI as TIFF instead of HDF5/N5.

Question 6: Is it encouraged to downsample the image towards isotropy?

Answer by Stephan: Yes, and this is actually the default suggestion for the multiresolution pyramids that are computed for resaving to N5/HDF5.
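
To illustrate with hypothetical numbers (these are not BigStitcher's exact defaults), a pyramid can downsample XY faster than Z until the voxels become roughly isotropic:

```
voxel size 0.25 x 0.25 x 1.0 µm (x, y, z)
level 0: {1, 1, 1}  ->  0.25 x 0.25 x 1.0 µm
level 1: {2, 2, 1}  ->  0.50 x 0.50 x 1.0 µm
level 2: {4, 4, 1}  ->  1.00 x 1.00 x 1.0 µm  (isotropic)
level 3: {8, 8, 2}  ->  2.00 x 2.00 x 2.0 µm
```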

Question 7: In the detectable patterns in the filename while defining a new dataset, how does “Z-Planes (experimental)” work (or how will it work)?

Answer by David: If you have the 2D slices of a stack as single files with a running number in the filename, e.g. filename[z-index].tif, we will combine those 2D slices into one 3D volume (some more details: error loading 16-bit grayscale tiles (754 tiles, 3 channels per tile, 3 z-slices per channel) · Issue #16 · PreibischLab/BigStitcher · GitHub).
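
For example, a folder like the following (hypothetical filenames) would be combined into a single 3D volume:

```
sample_z000.tif
sample_z001.tif
sample_z002.tif
...
```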

Question 8: I wonder what the difference is between N5 and HDF5? Which one do you prefer for downsizing the file?

Answer by John: HDF5 stores your image data as blocks inside a single file, whereas N5 usually stores the blocks as separate files. N5 is compatible with and more general than HDF5, that is, most/all tools that use N5 can also read HDF5. HDF5 might be better if you have other (e.g., non-Fiji) tools that need to access your data; however, writing to HDF5 can be slower than to N5, since it cannot be done in parallel.
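
To make the difference concrete, an N5 container is just a directory tree; the group layout sketched below is the BigDataViewer-style one and may differ depending on the exporter:

```
dataset.n5/
├── attributes.json              # container metadata
└── setup0/timepoint0/s0/
    ├── attributes.json          # dimensions, blockSize, dataType, compression
    ├── 0/0/0                    # one compressed block per file
    ├── 1/0/0
    └── ...
```

Because each block is its own file, many writers can write different blocks simultaneously, which is where the parallel-writing advantage over a single HDF5 file comes from.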

Question 9: David/Stephan: Is it possible to import HDF5 files (and not TIFFs) for defining a dataset that needs to be stitched/fused? Thanks!

Answer by Stephan: Right now we support everything that Bio-Formats supports. If you have previously exported multi-resolution HDF5s, there is no straightforward way (except re-saving) right now, but it could be added if it is required …

Question 10: @Stephan Thanks for the presentation and tutorial. I have been trying to test BigStitcher with my own data as I was listening to you (a 12x7 mosaic in Olympus, each tile a Z-stack with about 40 planes, OIB files). I am good until the point where it asks to resave as HDF5; then it runs for a while and errors with “Buffer too small (got 1048576, expected 2097152)”. Any ideas?

Answer by Stephan: Mmh, no idea right now. It could be a Bio-Formats issue rather than a BigStitcher problem, but I'm not sure. If re-saving to TIFF and then importing works, this is likely. You can always open an issue that we will try to address when we can: GitHub - PreibischLab/BigStitcher: ImgLib2/BDV implementation of Stitching for large datasets (this is being followed up at Problem stitching mosaic of oib Olympus files · Issue #89 · PreibischLab/BigStitcher · GitHub)

Question 11: Is BigStitcher also usable to register long time series (2D or 3D), or is it not really suitable for this?

Answer by Stephan: Yes, you can use BigStitcher for time-series alignment based on interest points. It offers many transformation models to achieve seamless time-series stabilization, even in the multi-view case. (It is not described in the paper, as we focused on cleared data.)

Question 12: Can BigStitcher cope with non-overlapping tiles?

Answer by Stephan: Yes, to some extent. You can handle several sets of non-connected tiles. BigStitcher can use the metadata information to move them relative to each other even if no overlap can be computed (check the weak links / strong links explanations in the paper for more details).

Question 13: Hello!! Very nice talk. My understanding is that for this tool to work we need overlapping images. Is that true? It may seem an odd question, but I am working with imaging mass cytometry. I have stacks made of 30 channels that I would like to combine to build a bigger image. The overlapping area is a few pixels or nothing. For some of them I have x and y coordinates. Can I use BigStitcher to combine them?

Answer by Stephan: You could use it, but not the actual alignment procedure. You can still interactively move tiles around, visualize, and fuse, for example.

Question 14: What “special” kinds of tile configurations can BigStitcher handle? More and more microscope vendors these days offer some variation of “smart tiling”, which is sometimes not in a regular grid fashion. What should one make sure of when trying to stitch such non-standard tilings using BigStitcher (tile-config, metadata, coordinates…)?

Answer by Stephan: BigStitcher by default loads the metadata stored in the vendor format, so as long as that works, any kind of configuration should be supported. Additionally, you can import a text file that contains the approximate tile locations after importing the data (see the sketch below), so any arrangement should not be a problem.
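
For reference, a minimal tile-configuration text file could look roughly like this (the syntax below follows the classic Fiji TileConfiguration.txt with hypothetical pixel coordinates; check the BigStitcher documentation for the exact format its loader expects):

```
dim = 3
tile_00.tif; ; (0.0, 0.0, 0.0)
tile_01.tif; ; (1843.2, 0.0, 0.0)
tile_02.tif; ; (0.0, 1843.2, 0.0)
```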

Question 15: Can it also handle a non-rectangular array of images (for example, missing corner images)?

Answer by David: This is similar to the question above. BigStitcher supports any arrangement of tiles. Each tile can have its own location (and rotation actually) assigned. A grid is a special, simpler case.

Question 16: Is it possible to visualize all the final pairwise links of the dataset at the same time in BigStitcher?

Answer by Stephan: Yes, you can overlay them after running the global stitching process … just press “l”.

Question 17: Is there any shading correction done during the stitching?

Answer by Stephan: We optionally adjust the intensities of each stack so that overlapping pixels have the same intensity. This is achieved by a global optimization over the intensities (the same idea as for the registrations), except that here we change the image brightness. The original concept was introduced here: https://academic.oup.com/bioinformatics/article/33/16/2563/3104469. We also support on-the-fly flatfield correction per tile, if you can provide a bright and a dark image: BigStitcher Flatfield correction - ImageJ
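
For readers unfamiliar with flatfield correction, here is a minimal sketch of the textbook formula (BigStitcher's exact implementation may differ):

```python
import numpy as np

def flatfield_correct(raw, bright, dark):
    """Standard flatfield correction: (raw - dark) / (bright - dark),
    rescaled so the overall image brightness is preserved.
    bright = image of an empty, uniform field; dark = image with the shutter closed."""
    flat = bright.astype(np.float64) - dark
    corrected = (raw.astype(np.float64) - dark) / np.maximum(flat, 1e-12)
    return corrected * flat.mean()
```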

Question 18: Which characteristics should a PC have to use this tool with such big data? I would like to work with an image of 100 tiles taken with a 20x objective and 5 Z-planes (1.5 µm Z-step) acquired on a Leica microscope.

Answer by Stephan: We designed BigStitcher so that it even works on a notebook, as long as you can fit the data on your hard drive.

Question 19: Does it provide only max projection, or also other kinds of 2D projection?

Answer by Stephan: BigStitcher does not implement any projection itself, but you can apply any kind of projection that ImageJ/Fiji supports to the fused data.
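
For example, a minimal Jython script (run in Fiji's Script Editor with the language set to Python) that projects the currently open fused image; swap "Max Intensity" for "Average Intensity", "Sum Slices", etc.:

```python
from ij import IJ

imp = IJ.getImage()  # the fused image currently open in Fiji
IJ.run(imp, "Z Project...", "projection=[Max Intensity]")
```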

Question 20: This demo is great for getting started. Is there a resource/manual which outlines similar and more complicated workflows, such as chromatic aberration correction, content-based multi-view fusion, and parameter tuning? There are so many great features, but I've struggled to access them.

Answer by Stephan: We started a YouTube playlist where we show different use cases for how to use BigStitcher. If you have a specific question, please contact us and we'll make one for that use case. Here is the channel: BigStitcher HowTo - YouTube

Question 21: Can we assess the quality of stitching and refinement by some quality metrics (instead of just basing our judgement on the “feeling” that some method is worse than another one)?

Answer by Stephan: BigStitcher does show the cross-correlation value for the overlap of each pair of images, which seems to be a reasonable measure. This is also used in the automatic determination of correct overlaps, together with measures of the global consistency of all overlapping tiles.
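
For reference, assuming the standard zero-normalized form, the cross-correlation of two overlapping regions $A$ and $B$ is

$$\mathrm{NCC}(A, B) = \frac{\sum_i (A_i - \bar{A})(B_i - \bar{B})}{\sqrt{\sum_i (A_i - \bar{A})^2} \; \sqrt{\sum_i (B_i - \bar{B})^2}}$$

Values close to 1 indicate that the overlapping pixels agree well after alignment.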

Question 22: Hi, how are fiducial points found for chromatic corrections? I have some beads in my sample but also other punctate-looking things; are there any knobs to turn there?

Answer by David: We do blob detection using the Difference-of-Gaussian. In the right-click menu there are only a few presets, but you can go to Multiview mode and have more control over the point detection there: BigStitcher Interest points - ImageJ
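
For intuition, a minimal Difference-of-Gaussian detector looks roughly like this (an illustrative sketch, not BigStitcher's actual implementation; sigma, the scale ratio, and the threshold are the knobs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_peaks(img, sigma=2.0, ratio=2.0, threshold=0.05):
    img = img.astype(np.float64)
    # blobs of size ~sigma respond strongly to the difference of two Gaussian blurs
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, sigma * ratio)
    # keep local maxima of the DoG response above a relative threshold
    peaks = (dog == maximum_filter(dog, size=3)) & (dog > threshold * dog.max())
    return np.argwhere(peaks)  # coordinates of detected blobs
```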

Question 23: Thanks for the great presentation! I was wondering how registration errors were computed?

Answer by Stephan: It depends. For stitching we compute the cross-correlation; for interest-point-based alignment, the distance between corresponding points. Additionally, there is a measure of the global consistency of all pairs of images, which is displayed as the error after the global optimization.

Question 24: Could you use BigStitcher to do a rough alignment of serial tissue sections? That is, the images will have similar structures but not exactly the same ones. Instead of finding overlap at the image edges, it would find it over the entire sample.

Answer by Stephan: I don’t feel that it is the right software for this. TrakEM2 would be a better choice, for example, I think (assuming these are multiple 2D sections).

Question 25: In mosaicJ and BigStitcher, can you use ‘landmarks’ that are actually fiducials? And second, can you base the entire alignment based off of these fiducials? For example gold particles.

Answer by Stephan: Yes, this is supported through interest-point-based registration and usually works very robustly, see for example the underlying paper for the matching of beads: https://www.nature.com/articles/nmeth0610-418

Question 26: Is there GPU acceleration?

Answer by Stephan: Unfortunately only for the deconvolution right now, but it could be added.

Question 27: What method do you use for deconvolution of such large samples?

Answer by Stephan: We use a derivative of Lucy-Richardson Deconvolution that was re-derived and optimized for multi-view acquisitions: https://www.nature.com/articles/nmeth.2929
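
For background, the classic (single-view) Richardson–Lucy update that the multi-view method in the linked paper builds on is

$$\psi^{(k+1)} = \psi^{(k)} \cdot \left( \frac{\phi}{\psi^{(k)} \ast P} \ast P^{*} \right)$$

where $\phi$ is the observed image, $\psi^{(k)}$ the current estimate, $P$ the PSF and $P^{*}$ its mirrored version; the paper re-derives and optimizes this scheme for combining multiple views.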

Question 28: @DavidH: Does the deconvolution require pre-acquired bead PSFs? Or beads embedded in the sample? Thanks

Answer by Stephan: Both work; you can load a simulated/measured PSF or extract it from the data based on, e.g., beads that were embedded around the sample.

Question 29: @Stephan, thank you :slight_smile: On the ImageJ page it says that for the deconvolution you need to compile the required libraries yourself. Is there a guide on how to do this?

Answer by Stephan: Only if you want to use GPU support; the CPU version is also quite efficient. There is a GitHub page for the GPU libraries that explains how to do the GPU compilation: GitHub - StephanPreibisch/FourierConvolutionCUDALib: Implementation of 3d non-separable convolution using CUDA & FFT Convolution

Part II: MosaicExplorerJ

Question 30: In mosaicJ and BigStitcher, can you use ‘landmarks’ that are actually fiducials? And second, can you base the entire alignment based off of these fiducials? For example gold particles.

Answer by Julien: In MosaicExplorerJ, the XY alignment of the mosaic is guided by the user, who picks landmarks manually (as will be shown in the demo), so sample signal or fiducials will work equally well (and if the sample signal is too homogeneous, fiducials are better). The fiducials should, however, be homogeneously distributed so that some of them are visible in the overlap regions between the tiles. There are only 4 parameters (degrees of freedom) for the overall XY alignment of the mosaic, and they are basically adjusted from 2 pairs of matching landmarks (see the counting argument below).
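
The counting works out as follows: each pair of matching landmarks picked across two adjacent tiles constrains both the $x$ and the $y$ coordinate, i.e. contributes 2 scalar equations, so 2 pairs give $2 \times 2 = 4$ equations, exactly determining the 4 alignment parameters.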

Question 31: Is the max-projection intensity greater over the overlapping regions, or does the algorithm equalize it?

Answer by Julien: You can choose the blending method for the overlap regions; the “add” function serves the purpose of visualizing the overlap region clearly, but the other methods (e.g. copy, max) will equalize the intensity.

Question 32: Hello, can this macro be used to correct for a drift taking place in a time series (of a Z-stack)?

Answer by Sébastien: Not currently, we don’t support time-lapses.

Question 33: @Sebastien do you have a more color-blind-friendly option for visualization of the alignment (e.g. green-magenta instead of red-green)?

Answer by Sébastien: Excellent point. This is in the new release (v1.5).

Question 34: Can you say something about the specifications of the system you connected to, and are there any guidelines depending on image size?

Answer by Sébastien: It is an Acquifer HIVE, but all operations could be performed just as efficiently on any computer, as long as the data fits on local storage.

Part III: BigWarp

Question 35: BigWarp questions. I’m quite surprised that you’ve chosen the EM as the moving image; we tend to use the EM as the target to keep the resolution. Could you explain the reason? Also, I’ve been having issues with the latest version of Fiji not allowing the F8 button to offer the choice of transformation. I had to use a version from Jan 2019. When will this be fixed in the Fiji update?

Answer by John: EM sample preparation can induce deformation, so the EM image is used as the moving image, but it is also possible to do the opposite. As for the F8 button issue, please specify how the function fails so we can follow up.

Question 36: Are there methods to verify warping or quantify warping error other than by eye?

Answer by John: Warp verification is difficult. There are methods that are not “by eye”, but they require a different, independent image channel or some other ground truth, such as point annotations, which are usually placed by humans and are therefore similar to “by eye” verification.
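
One common quantitative option, sketched below, is to hold out some landmark pairs from the fit and compute the mean distance between the held-out correspondences after warping (often called the target registration error); the function and array names here are hypothetical:

```python
import numpy as np

def target_registration_error(target_pts, warped_pts):
    """Mean Euclidean distance between annotated points in the target image
    and the corresponding moving-image points after applying the transform.
    Both arguments are (N, D) arrays of held-out landmark coordinates."""
    return np.linalg.norm(target_pts - warped_pts, axis=1).mean()
```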
