Using BigDataViewer for Serial-Two-Photon (STP) images

As suggested by @tpietzsch, here is my BDV question, in case it may be of value to someone else as well:

I’m currently evaluating whether it would make sense to switch our STP image analysis pipeline over to the BigDataViewer and use the HDF5 format.
The way this microscope works is that it makes a mosaic of z-stacks from the top 50µm of the tissue sample, then automatically slices 50µm off using a vibratome and repeats that process until the complete sample (usually a mouse brain) is imaged.

As a result, for each physical section we get 6x9 mosaic stacks, each with a size of ~1600x1600x10, times 3 for all three channels, times 300 for all sections. Currently, these 6x9 mosaic stacks are stitched and then saved as TIFF images. This obviously requires a lot of time for assembling the images, but it is also not ideal space-wise, since both the stitched and the unstitched data need to be archived in case something goes wrong during stitching.

Ideally, the way I would want to solve this is to have the individual image stacks remain as individual stacks in the HDF5, so that the placement of these stacks could be changed without modifying the underlying image data. The question is now how to go about this. Conceptually, I suppose each stack would best fit into the “sources” category, which would give incredible flexibility, but is it reasonable to attempt to work with 6x9x300x3 = 48600 sources? Or should that rather be handled at a lower level (e.g. a new CacheArrayLoader implementation)?
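
Just to make the “one source per stack” idea concrete, I imagine something along these lines with bdv-vistools (a rough sketch only; `TileRecord` and its `openStack()`, `placement()` and `name()` methods are placeholders for however our tiles and their positions are stored, not real API):

```java
import bdv.util.Bdv;
import bdv.util.BdvFunctions;
import bdv.util.BdvOptions;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.realtransform.AffineTransform3D;
import net.imglib2.type.numeric.integer.UnsignedShortType;

// Hypothetical: one BDV source per mosaic stack, positioned by an affine
// transform instead of resampling pixels into a stitched volume.
Bdv bdv = null;
for ( final TileRecord tile : tiles )
{
	final RandomAccessibleInterval< UnsignedShortType > stack = tile.openStack();
	final AffineTransform3D placement = tile.placement(); // stage position + stitching correction
	BdvOptions options = BdvOptions.options().sourceTransform( placement );
	if ( bdv != null )
		options = options.addTo( bdv ); // reuse the same viewer window for all tiles
	bdv = BdvFunctions.show( stack, tile.name(), options );
}
```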

Thanks,
Christian


Hi @cniedwor,

just out of curiosity, this STP system you are referring to, is this a TissueCyte or something other (possibly home-built)?

Cheers
Niko

Hi Niko,

it is indeed a TissueCyte system.

Cheers,
Christian


Hi @cniedwor,

no, 48600 sources will definitely not work. It will be far too slow to render, because to render a pixel, the values of all visible sources (visible in the sense of being enabled) are combined. For each rendered output pixel, a value is computed for every source, with ImgLib2 view extension taking care of determining whether the voxel falls out of bounds or whether there is an actual source value. This computation is repeated over and over per rendered pixel, so it will be extremely slow.
Also, the Brightness and Visibility dialogs will be unusable, because they are not intended to display 48600 checkboxes…
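
Schematically, the per-pixel work looks something like this (just a sketch to show the scaling, not the actual BDV renderer code), which is why the cost grows linearly with the number of enabled sources:

```java
import java.util.List;

import net.imglib2.RealRandomAccess;
import net.imglib2.RealRandomAccessible;
import net.imglib2.type.numeric.RealType;

// Every output pixel queries every enabled source.
double renderPixel( final List< RealRandomAccessible< ? extends RealType< ? > > > visibleSources,
		final double[] viewerCoordinate )
{
	double sum = 0;
	for ( final RealRandomAccessible< ? extends RealType< ? > > source : visibleSources )
	{
		final RealRandomAccess< ? extends RealType< ? > > access = source.realRandomAccess();
		access.setPosition( viewerCoordinate ); // out-of-bounds handled by view extension
		sum += access.get().getRealDouble();    // with 48600 sources: 48600 lookups per pixel
	}
	return sum;
}
```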

I think the other way is promising. You could create a new CacheArrayLoader that renders the stitched source block-wise.
You could still expose the raw unstitched data as separate sources, and the new CacheArrayLoader could read its data from these “raw sources”. For the raw sources I would place the stacks of each channel next to each other without stitching, so you end up with 3 sources of 9600x14400x3000.
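
A rough sketch of such a loader, assuming 16-bit data and the CacheArrayLoader interface roughly as it is in current bdv-core (loadArray(timepoint, setup, level, dimensions, min)); `RawTileStore`, `RawTile` and their methods are placeholders for however the unstitched stacks and their placements are accessed:

```java
import bdv.img.cache.CacheArrayLoader;
import net.imglib2.img.basictypeaccess.volatiles.array.VolatileShortArray;

// Assembles each requested block of the "stitched" source on the fly
// from the raw, unstitched mosaic stacks.
public class StitchedArrayLoader implements CacheArrayLoader< VolatileShortArray >
{
	private final RawTileStore tiles; // placeholder: access to raw stacks and their placements

	public StitchedArrayLoader( final RawTileStore tiles )
	{
		this.tiles = tiles;
	}

	@Override
	public VolatileShortArray loadArray( final int timepoint, final int setup, final int level,
			final int[] dimensions, final long[] min ) throws InterruptedException
	{
		final short[] data = new short[ dimensions[ 0 ] * dimensions[ 1 ] * dimensions[ 2 ] ];
		// For every tile overlapping the requested block, copy its voxels into the
		// block buffer at the position given by the tile's placement transform.
		// tilesOverlapping() and copyInto() are made-up helpers for this sketch.
		for ( final RawTile tile : tiles.tilesOverlapping( setup, level, min, dimensions ) )
			tile.copyInto( data, min, dimensions );
		return new VolatileShortArray( data, true );
	}

	@Override
	public int getBytesPerElement()
	{
		return 2;
	}
}
```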

Depending on how complicated the transformations are, you could store them in a custom extension of either the XML (which would probably work for one affine per mosaic stack) or the HDF5 (if you have elastic transformations defined by matching support points or something like that).
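
For the simple case of one affine per mosaic stack, each transform boils down to 12 row-packed numbers, which is the same representation the SpimData XML already uses for its affine elements. A minimal sketch of the conversion:

```java
import net.imglib2.realtransform.AffineTransform3D;

// One affine per mosaic stack, stored as 12 row-packed doubles.
double[] toStorage( final AffineTransform3D placement )
{
	return placement.getRowPackedCopy(); // 3x4 matrix, row by row
}

AffineTransform3D fromStorage( final double[] rowPacked )
{
	final AffineTransform3D placement = new AffineTransform3D();
	placement.set( rowPacked ); // expects 12 values, row-major
	return placement;
}
```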

best regards,
Tobias