As suggested by @tpietzsch, here is my BDV question, in case it may be of value to someone else as well:
I’m currently evaluating whether it would make sense to switch our STP image analysis pipeline over to the BigDataViewer and use the HDF5 format.
The way this microscope works is that it acquires a mosaic of z-stacks from the top 50µm of the tissue sample, then automatically slices 50µm off using a vibratome and repeats that process until the complete sample (usually a mouse brain) is imaged.
As a result, for each physical section we get 69 mosaic stacks, each with a size of ~1600x1600x10 pixels, times 3 for the three channels, times 300 for all sections. Currently, these 69 mosaic stacks are stitched and then saved as tiff images. This obviously costs a lot of time for assembling the images, but it is also not ideal space-wise, since both the stitched and unstitched data need to be archived in case something goes wrong during stitching.
Ideally, the way I would want to solve this is to keep the individual image stacks in the HDF5, so that the placement of these stacks can be changed without modifying the underlying image data. The question is now how to go about this. Conceptually, I suppose each stack would best fit into the “sources” category, which would give incredible flexibility, but is it reasonable to attempt to work with 69 × 300 × 3 = 62,100 sources? Or should that rather be handled at a lower level (e.g. a new CacheArrayLoader implementation)?
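For scale, here is the arithmetic behind that source count and the raw data volume it implies. This is just a back-of-the-envelope sketch; the 16-bit pixel depth is my assumption, not something stated above.

```java
public class StpDataEstimate {
    public static void main(String[] args) {
        // Numbers from the post: 69 tiles per section, 300 sections, 3 channels
        long sources = 69L * 300 * 3;             // 62,100 individual stacks
        long voxelsPerStack = 1600L * 1600 * 10;  // ~25.6 million voxels each
        // Assuming 16-bit pixels (2 bytes per voxel) -- not stated in the post
        long totalBytes = sources * voxelsPerStack * 2;
        System.out.printf("stacks:   %d%n", sources);
        System.out.printf("raw data: %.2f TB%n", totalBytes / 1e12);  // ~3.18 TB
    }
}
```

So whichever mechanism is used (sources or a custom loader), it has to cope with tens of thousands of small stacks and terabytes of pixel data in total.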