BigVolumeViewer Tech Demo

imglib2
imagej
bigdataviewer
sciview

#1

Hi,

Over the last three months, since @kephale’s SciView Hackathon, I have been trying to get GPU volume rendering working for Biiiiiigggg volumes (i.e., too large to fit into RAM, and certainly too large to fit on the GPU).
This is now in a state where I’m comfortable letting people play with it.
(In reality, I’m going on holiday tomorrow, and I wanted to push this out before…)

I just uploaded it to update site http://sites.imagej.net/BigVolumeViewer/

(Source code is here if you are really interested: https://github.com/tpietzsch/jogl-minimal
It is still not properly cleaned up. When it is, it will be split into multiple repos and moved to the appropriate places…)

Anyway:
If you add Fiji update site http://sites.imagej.net/BigVolumeViewer/ and update, you will then have a Plugins > BigDataViewer > Volume Rendering Tech Demo menu entry.
That opens the following dialog:


In there just select the XML file of a BigDataViewer XML/HDF5 dataset and leave everything unchanged.
(Backends other than HDF5 will also work, but the data must be 16-bit, and if it’s not multi-resolution, it will be painful.)
Click OK.
Then something like this should pop up:

This is basically a BigDataViewer window, but it does volume rendering (maximum intensity projection) using OpenGL.
Most of the BDV mouse and key actions work. Navigation can be a bit confusing, because you don’t see where the “screen plane” is, but navigation is still centered around it. “R” resets the transform to the initial one.
“1”, “2”, “3”, etc. switch channels, “F” toggles fused/single-channel mode, and “shift 1”, “shift 2”, etc. show/hide channels in fused mode.

The BDV brightness and visibility dialogs also work (“S”, “F6”):

This is all still badly hacked together, but it would be great if some of you could try it!!!
In particular, I would be interested in OS/hardware configurations that don’t work. (This uses mostly ancient OpenGL stuff that should really work everywhere, but you know how it is…)

Some notes on the other settings in the initial “Volume Rendering Tech Demo” dialog.

  • Render width/height: Everything is rendered to an offscreen surface of this size and then scaled up to fill the window.
  • Dithering: Especially if multiple sources are rendered at the same time, things can get slooooow (at least on my Macbook GPU…) when rendered at full resolution. A dither window size of “4x4” means: draw only one pixel in each 4x4 window. Then, if there is time left, draw another pixel in each 4x4 window, then another, until the target time is up. Interpolate the rest. Continue in the next frame, until all 4x4 pixels have been drawn. Number of dither samples: pixels are interpolated from this many nearest neighbors when dithering. This is not very expensive – turn it up to 8.
    Dithering is a double-edged sword. Although it is a lot faster to draw only every 16th pixel, iterating this until all 16 are filled is a lot slower than rendering them all in the first place. My explanation is that each iteration still touches enough texture data from all over the place to make caches less efficient… (?)
    So maybe if you have a decent GPU, you don’t need it…
  • GPU cache size (in MB): It helps to turn this up as much as possible, obviously… It depends on how much memory your GPU has. For example, my GPU has 1GB, so I can go up to maybe 600 MB, but not more (the OS and other programs need some of that memory too!)
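The dithering schedule described above can be sketched in plain Java (an illustrative model only, not the actual BigVolumeViewer code; the class and method names are made up for this example):

```java
// Sketch of the dithering schedule: each pass renders one pixel per
// k×k window, until the time budget is used up; the remaining pixels
// are interpolated and filled in over subsequent frames.
public class DitherScheduleSketch {
    // how many pixels have actually been rendered after some passes
    public static int pixelsRendered(int width, int height, int k, int passesCompleted) {
        int windows = (width / k) * (height / k); // number of k×k windows
        return windows * passesCompleted;         // one pixel per window per pass
    }

    public static void main(String[] args) {
        // 640×480 render target with 4x4 dither windows:
        System.out.println(pixelsRendered(640, 480, 4, 1));  // one pass: 19200 pixels
        // after all 16 passes, the full target is covered:
        System.out.println(pixelsRendered(640, 480, 4, 16) == 640 * 480);  // true
    }
}
```

This is why a single pass is cheap (1/16th of the pixels), while completing all 16 passes costs more than one full-resolution render, per the cache-efficiency explanation above.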

The rest is not so interesting.

Future plans:

  • I’m working with @skalarproduktraum to integrate this into scenery and @kephale’s SciView. This will probably be the main way that most users access it. It is already underway – we had a session today with @skalarproduktraum, and we are optimistic… This will happen very soon.
  • At the moment, the type of imglib2 data you can throw at it is limited. It has to be UnsignedShortType, and it has to come from CellImgs. So a lot of stuff that BigDataViewer (and bdv-vistools) can do, this cannot do yet. Eventually, I want every reasonable imglib2 type to work with this. It also needs to pick up on changes in the images and update the data in the GPU cache accordingly. The dream is that this will simply work out of the box in bdv-vistools, without any changes to existing code. There will just be a shortcut to switch to volume mode, and that’s that… But this is at least one year away, I would say.

I almost forgot the most important shortcuts:
“B” adds a textured box in a random location.
“shift B” removes a random existing box.


Have fun … :slight_smile:

best regards,
Tobias


#2

Cheers, and fantastic work. Clearly I’m particularly excited about the scenery and SciView integration.


#3

One thing I forgot:
The next two items on top of the TODO are

  • Proper emission-absorption model instead of maximum intensity projection. This should be very easy, but I ran out of time…
  • Priming the BigDataViewer cache. Data can only be uploaded to the GPU once it is in the RAM cache of BigDataViewer. BigDataViewer itself does a much better job at this – it doesn’t show you black images when going to a new timepoint, etc. This is a matter of requesting the right data in the right order, so that at least low-resolution versions are there when the GPU starts rendering. This should also not be too difficult, but again… I ran out of time…
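The coarse-to-fine prefetch order described in the cache-priming item can be sketched like this (a hypothetical helper, not the BigDataViewer API; in BDV conventions the highest mipmap level index is the coarsest resolution):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of coarse-to-fine prefetching: request blocks from the lowest
// resolution (highest level index) first, so that something displayable
// is in the RAM cache before the GPU needs the fine levels.
public class PrefetchOrderSketch {
    static List<Integer> prefetchOrder(int numResolutionLevels) {
        List<Integer> order = new ArrayList<>();
        for (int level = numResolutionLevels - 1; level >= 0; level--)
            order.add(level);  // coarsest level first, finest last
        return order;
    }

    public static void main(String[] args) {
        System.out.println(prefetchOrder(4));  // [3, 2, 1, 0]
    }
}
```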

#4

@tpietzsch, you are a hero. You should get a shrine (somewhere on a beach in Fiji maybe?). I’m serious… thank you so much for all you did do, do do, and hopefully will continue to do for many years to come.


#5

@tpietzsch - thanks a lot! :slight_smile:

Now it’s so easy to visualize APR datasets. I have tried your suggestions, and here it is: a visualization of a 7GB APR in your Volume Rendering magic stuff (powered by the APRImgLoader done at our small hackathon):


#6

That looks awesome, thanks! Does it support dynamic datasets already? I tried with a BDV file and got the following errors. Happy to test further.

Shader status invalid: ERROR: 0:33: 'im__5__' : two consecutive underscores are reserved for future use 
ERROR: 0:34: 'sourcemin__6__' : two consecutive underscores are reserved for future use 
ERROR: 0:35: 'sourcemax__7__' : two consecutive underscores are reserved for future use 
ERROR: 0:46: 'lutSampler__9__' : two consecutive underscores are reserved for future use 
ERROR: 0:47: 'blockScales__10__' : two consecutive underscores are reserved for future use 
ERROR: 0:48: 'lutScale__11__' : two consecutive underscores are reserved for future use 
ERROR: 0:49: 'lutOffset__12__' : two consecutive underscores are reserved for future use 
ERROR: 0:65: 'offset__15__' : two consecutive underscores are reserved for future use 
ERROR: 0:66: 'scale__16__' : two consecutive underscores are reserved for future use 
ERROR: 0:109: 'vis__4__' : two consecutive underscores are reserved for future use 


Shader status invalid: ERROR: 0:33: 'im__52__' : two consecutive underscores are reserved for future use 
ERROR: 0:34: 'sourcemin__53__' : two consecutive underscores are reserved for future use 
ERROR: 0:35: 'sourcemax__54__' : two consecutive underscores are reserved for future use 
ERROR: 0:46: 'lutSampler__56__' : two consecutive underscores are reserved for future use 
ERROR: 0:47: 'blockScales__57__' : two consecutive underscores are reserved for future use 
ERROR: 0:48: 'lutScale__58__' : two consecutive underscores are reserved for future use 
ERROR: 0:49: 'lutOffset__59__' : two consecutive underscores are reserved for future use 
ERROR: 0:65: 'offset__62__' : two consecutive underscores are reserved for future use 
ERROR: 0:66: 'scale__63__' : two consecutive underscores are reserved for future use 
ERROR: 0:109: 'vis__51__' : two consecutive underscores are reserved for future use

#7

Hi,
interesting… I build shaders from snippets, where a snippet can occur multiple times, and use __{number}__ suffixes to separate snippet instances. I didn’t know that __ could be problematic.

It should be an easy fix, but I don’t have access to a computer at the moment. If you want to try it, just change these two patterns:



To some other pattern, e.g., "%s_x_%d_x_".
That should do it…
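The snippet-instancing idea can be illustrated with a small sketch (the class and methods here are made up; only the pattern strings follow the post and the error log, which shows identifiers like “im__5__”). GLSL reserves all identifiers containing two consecutive underscores, which is why the suffix pattern has to change:

```java
// Sketch: identifiers in a GLSL snippet get a per-instance suffix so the
// same snippet can appear several times in one shader program.
public class SnippetSuffixSketch {
    // old pattern: produces e.g. "im__5__", which GLSL rejects as reserved
    static String oldName(String identifier, int instance) {
        return String.format("%s__%d__", identifier, instance);
    }

    // fixed pattern without consecutive underscores, e.g. "im_x_5_x_"
    static String newName(String identifier, int instance) {
        return String.format("%s_x_%d_x_", identifier, instance);
    }

    public static void main(String[] args) {
        System.out.println(oldName("im", 5));  // im__5__  (reserved in GLSL)
        System.out.println(newName("im", 5));  // im_x_5_x_ (legal)
    }
}
```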


#8

If by “dynamic” you mean time points, then: yes.
Multi-angle, multi-channel, and time-series data are all supported.


#9

And that fixed it! This is brilliant – I confirm 3D+time and channels work great. Thanks a ton!

How easy would it be to implement vector data (spheres, tubes, or lines)?


#10

Vector data is something that I hope will be taken care of by scenery/sciview.

I’m still hoping for @skalarproduktraum to expose an OpenGL renderer constructor so that I can easily build it with my jogl OpenGL context.

Later, this might go through sciview, which is focused on meshes etc. One could simply add the bdv volumes as a scene object there, together with other stuff.

For now, if you want to write OpenGL code yourself: everything that happens between sceneBuf.bind( gl ); and sceneBuf.unbind( gl, false ); here


is the “scene” data, where you would put vectors etc. At the moment, I do not plan to put anything full-featured there myself, instead hoping to provide a scenery integration example soon.
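The structure of that hook point can be sketched as follows (all classes here are stand-ins, not the real jogl-minimal API; the real code passes a JOGL GL handle to bind/unbind):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of where custom scene geometry fits into a frame: between the
// bind and unbind of the offscreen scene buffer.
public class SceneHookSketch {
    static class SceneBuffer {               // stand-in for the real scene buffer
        final List<String> log;
        SceneBuffer(List<String> log) { this.log = log; }
        void bind()               { log.add("bind"); }
        void unbind(boolean blit) { log.add("unbind"); }
    }

    static List<String> renderFrame() {
        List<String> log = new ArrayList<>();
        SceneBuffer sceneBuf = new SceneBuffer(log);
        sceneBuf.bind();
        // --> here is where your own GL draw calls for spheres, tubes,
        //     lines, etc. (the "scene" data) would go
        log.add("draw custom scene geometry");
        sceneBuf.unbind(false);
        // afterwards the volume is ray-cast and composited with the scene
        return log;
    }

    public static void main(String[] args) {
        System.out.println(renderFrame());
    }
}
```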


#11

Thanks again, I will look into it as well as SciView’s capabilities (I am just starting to get familiar with the various Java-based rendering efforts). Do you know if SciView will support the BDV HDF5 format?


#12

I’m pretty sure it will.


#13

Yes, that is the plan. We already support images that are already open in ImageJ, and I have created an issue for adding BDV HDF5 support.


#14

The fix for this is now pushed to the update site. So if you saw the same errors, you can update and try again now.


#15

Fixed another bug where you would see an IndexOutOfBoundsException if you made a largish GPU cache.
So, if you saw that, you can also update and try again now.


#16

Fantastic! Works great for me!

I have one question in terms of future plans:
One of our use cases would be manual corrections of 3D segmentation.
A simple example would be a missing segmentation.

A great workflow could be:

  • See the data in BigVolumeViewer as maximum projection
  • Click in BVV at a pixel (I assume BVV knows the 3D coordinate of the maximum intensity that it shows?)
  • BigDataViewer would zoom into this 3D location in 2D-slicing mode
  • The user could draw the missing segmentation in 2D slicing mode

Do you think this would be possible?
I don’t know, but maybe it is actually easy: it might be enough to be able to get the 3D position of the displayed maximum intensity in BVV upon a mouse click (and feeding this coordinate to BDV would be code that one could already write, wouldn’t it?).
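The picking idea in the question can be sketched in plain Java (a hypothetical helper, not part of BVV): a maximum intensity projection already visits every sample along the viewing ray through the clicked pixel, so remembering the argmax while compositing would give the 3D position behind the click.

```java
// Sketch: along the ray through the clicked pixel, the index of the
// brightest sample identifies the 3D point shown by the MIP.
public class MipPickSketch {
    static int argmaxAlongRay(double[] samplesAlongRay) {
        int best = 0;
        for (int i = 1; i < samplesAlongRay.length; i++)
            if (samplesAlongRay[i] > samplesAlongRay[best])
                best = i;
        return best;
    }

    public static void main(String[] args) {
        double[] ray = { 10, 42, 250, 17 };       // intensities sampled along the ray
        System.out.println(argmaxAlongRay(ray));  // 2 -> the third sample is brightest
    }
}
```

Mapping that sample index back through the ray parametrization and the current view transform would yield the world coordinate to hand to BDV.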