Force Caching of all timepoints for small regions

Dear BDV experts,

Is there a method that forces bdv to cache all timepoints for the current region?

background:
We have added a bdv viewer to our project using bigdataviewer-vistools. The viewer allows us to quickly jump to small subregions of 2D image stacks to look at the dynamics of single fluorescent molecules. Overall, the viewer is awesome. It provides great navigation options, hotkeys, etc., and has helped us out a lot.

Our only problem is that video playback updates very slowly when moving the time slider at the bottom. Usually we are only looking at 100 × 100 pixel regions, so fitting all timepoints in memory shouldn't be an issue.

Ideally, we would like to call a method that forces caching of all timepoints once we have set the view to a small 100 × 100 pixel subregion. Then only the pixels in that subregion would be cached for all timepoints. We already have a goTo method implemented, so if there is a simple way to force caching, we could just drop it in there.
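To make it concrete, here is roughly the kind of helper we imagine (just a sketch — forceCacheRegion and numTimepoints are made up, and it simply reads every pixel of the non-volatile source to pull the blocks into the cache):

import bdv.viewer.Source;
import net.imglib2.FinalInterval;
import net.imglib2.Interval;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.type.numeric.RealType;
import net.imglib2.view.Views;

// Hypothetical helper: pull one XY subregion into the cache for all timepoints
// by reading every pixel of the (non-volatile) source at full resolution.
static < T extends RealType< T > > void forceCacheRegion( final Source< T > source, final Interval xyInterval, final int numTimepoints )
{
    for ( int t = 0; t < numTimepoints; ++t )
    {
        if ( !source.isPresent( t ) )
            continue;
        final RandomAccessibleInterval< T > img = source.getSource( t, 0 ); // level 0 = full resolution
        final Interval region = new FinalInterval(
                new long[] { xyInterval.min( 0 ), xyInterval.min( 1 ), img.min( 2 ) },
                new long[] { xyInterval.max( 0 ), xyInterval.max( 1 ), img.max( 2 ) } );
        // Touching the pixels forces the backing cache to load the corresponding blocks.
        Views.iterable( Views.interval( img, region ) ).forEach( pixel -> pixel.getRealDouble() );
    }
}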

Thanks @Christian_Tischer and @tpietzsch for the very helpful topics discussing bdv implementations. Unfortunately, I don't think this has been addressed yet, but please let me know if I missed it.

Most of the code for our viewer lives here:

2 Likes

There is a Prefetcher class that loads (or rather, triggers loading of) data into the cache.

This can be a starting point to build what you want.
Currently it is used from MultiResolutionRenderer.


Note that the above prefetching works on blocks of the data, and what a block is depends on how the data is stored…
With HDF5 resolution pyramids, you have small blocks, and everything should work.
If you have your data in tifs, slice-wise or stack-wise, then a block would be a whole slice/stack and the prefetching wouldn't do much good ("updates very slowly" kind of makes me assume that this is the case for your data?). In this case, you would need to put an additional CachedCellImg with smaller blocks on top of your source.
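For example, such a re-blocked layer could be built with imglib2-cache along these lines (just a sketch; UnsignedShortType and the 32×32×1 cell size are placeholders for your data):

import net.imglib2.RandomAccessibleInterval;
import net.imglib2.cache.img.CellLoader;
import net.imglib2.cache.img.ReadOnlyCachedCellImgFactory;
import net.imglib2.cache.img.ReadOnlyCachedCellImgOptions;
import net.imglib2.loops.LoopBuilder;
import net.imglib2.type.numeric.integer.UnsignedShortType;
import net.imglib2.util.Intervals;
import net.imglib2.view.Views;

// Put a CachedCellImg with small cells on top of a slice-/stack-wise source,
// copying pixels from the slow source whenever a small cell is first requested.
static RandomAccessibleInterval< UnsignedShortType > reblock( final RandomAccessibleInterval< UnsignedShortType > slow )
{
    final CellLoader< UnsignedShortType > loader = cell ->
            LoopBuilder.setImages( Views.interval( slow, cell ), cell )
                    .forEachPixel( ( in, out ) -> out.set( in ) );

    return new ReadOnlyCachedCellImgFactory().create(
            Intervals.dimensionsAsLongArray( slow ),
            new UnsignedShortType(),
            loader,
            ReadOnlyCachedCellImgOptions.options().cellDimensions( 32, 32, 1 ) );
}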

3 Likes

We export our tif stacks to xml/h5 format using the BigDataViewer exporter, and we only work with the xml/h5 data loaded as SpimDataMinimal, so it should be able to load small chunks efficiently.
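(For context, we open the dataset with vistools in the standard way, roughly like this — the path is just a placeholder:)

import bdv.spimdata.SpimDataMinimal;
import bdv.spimdata.XmlIoSpimDataMinimal;
import bdv.util.BdvFunctions;

// Load the exported xml/h5 dataset and display it (throws SpimDataException).
final SpimDataMinimal spimData = new XmlIoSpimDataMinimal().load( "/path/to/dataset.xml" );
BdvFunctions.show( spimData );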

Thanks for the very helpful links. The Prefetcher seems like just the thing, but I am not sure if I have implemented it the right way. Is the idea to just call the prefetch method directly as needed to load all the required data into the cache, or do you think more extensive coding would be required, like making my own implementation of MultiResolutionRenderer?

I assumed just calling the prefetch method would be sufficient. Do I understand correctly that the prefetch method has to be called separately for each time point? I assumed this was the case and copied code fragments to build a running version. To cache all timepoints, I call the prefetch method for all timepoints and all sources once the viewer is set to a small region. Updated code is here. The stuff at the bottom of the goTo method is most relevant:

If I work with videos (xml/h5) on my local computer, the performance is great even without caching. We were hoping caching would help with videos stored on our network. Unfortunately, caching seems to be very slow, and once it is done the video still doesn't seem to be entirely loaded. I have been testing different LoadingStrategy options, e.g. setting it to BLOCKING, to try to confirm that caching is working.

I am assuming rendering isn’t the problem because locally I don’t have any issues, so I think it must be just loading the data into the cache.

I will continue testing, but I wanted to check back about whether I have the right strategy or you had something else in mind. Thanks a lot for your help and for making this awesome viewer!

1 Like

Sounds all good, that’s exactly what I had in mind.

Yes, correct.
And also for each scale-level that might be needed in rendering.
The latter might be one reason that the video still doesn't seem to be entirely loaded.

You can get the highest resolution level needed (smallest index) and the coarsest (highest index) using something like this:

final int bestLevel = MipmapTransforms.getBestMipMapLevel( screenTransform, source, timepoint );
final int numMipmapLevels = source.getNumMipmapLevels();
final int maxLevel = numMipmapLevels - 1;

and then call prefetch for all levels from bestLevel to maxLevel.
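Put together, the loop could look roughly like this (prefetch( source, t, level ) stands in for whatever prefetch call you adapted from MultiResolutionRenderer, and numTimepoints comes from your data):

final int numMipmapLevels = source.getNumMipmapLevels();
final int maxLevel = numMipmapLevels - 1;
for ( int t = 0; t < numTimepoints; ++t )
{
    // finest level actually needed for the current screen transform at this timepoint
    final int bestLevel = MipmapTransforms.getBestMipMapLevel( screenTransform, source, t );
    for ( int level = bestLevel; level <= maxLevel; ++level )
        prefetch( source, t, level ); // hypothetical helper adapted from MultiResolutionRenderer
}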

As for the slow network transfer, I have no idea how to fix that. If you have access to the other end, you could set up a BigDataServer on the computer where the data is located. Or maybe storing the data as N5 (currently under construction, but almost done) will help. (But these are just guesses.)

1 Like