BigVolumeViewer "vistools"


Recently, I have been working on making BigVolumeViewer support more ImgLib2 data structures and pixel types.

Now, simple (small-ish) volumes are supported in addition to the large, GPU-cached, multi-resolution stacks. Any RandomAccessibleInterval of UnsignedByteType, UnsignedShortType, or ARGBType is supported (as long as it fits in GPU memory). Multiple small and large volumes of different data types can be shown at the same time, possibly transformed with respect to each other, and everything should be blended correctly. Some basic VolatileViews are handled correctly.

This was done in preparation for scenery/SciView pushing all volumes through a common pipeline, so this should appear in SciView very soon.

To play around, I expose a bigdataviewer-vistools-like API. Basically, replace Bdv by Bvv and most basic things should work. A lot of copy & paste for which I will hate myself later…, but it’s sooo convenient now. If you clone the repository, you can have a look at the examples.
Example01 shows ImageJ’s 16-bit “t1-head” sample.

final ImagePlus imp = IJ.openImage( "" );
final Img<UnsignedShortType> img = ImageJFunctions.wrapShort( imp );
final BvvSource source = BvvFunctions.show( img, "t1-head" );

Example02 does the same for (RGB) “flybrain” sample.
Example03 shows 2-channel time-series “mitosis” sample.
Example04 shows how to add multiple volumes to the same window (Bvv.options().addTo(...), analogous to vistools) and that you can transform them using Views.translate or specifying a source transform.
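The Example04 pattern can be sketched roughly like this (a sketch, not the actual example code; `img1` and `img2` are placeholders for any supported images):

```java
// Show a first volume; this creates the Bvv window.
final BvvSource first = BvvFunctions.show( img1, "first" );

// Add a second volume to the same window, translated by 100 pixels in x.
// Alternatively, Bvv.options().sourceTransform( ... ) can be used to
// specify an arbitrary affine source transform.
final BvvSource second = BvvFunctions.show(
		Views.translate( img2, 100, 0, 0 ),
		"second",
		Bvv.options().addTo( first ) );
```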

Example05 shows how to add SpimData datasets loaded from bdv xml files.
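A sketch of what that looks like, assuming the SpimData overload of `BvvFunctions.show` mirrors the vistools one (the xml path is a placeholder):

```java
public static void main( final String[] args ) throws SpimDataException
{
	// Load a BDV dataset from its xml and show all of its sources
	// in one Bvv window.
	final SpimDataMinimal spimData =
			new XmlIoSpimDataMinimal().load( "/path/to/dataset.xml" );
	final List< BvvStackSource< ? > > sources = BvvFunctions.show( spimData );
}
```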

Now it gets really cool! Example06 illustrates that CachedCellImgs wrapped as VolatileViews are correctly recognized and handled as tiled, GPU-cached, multi-resolution stacks (with 1 resolution level).

What we see here is 5 sources. On the left, the flybrain RGB volume.
In the middle the red channel, virtually extracted using Converters.convert.
On top of that, two Gauss-smoothed versions (in green and blue) that are lazily evaluated by CellLoaders of a CachedCellImg. These are added, wrapped in VolatileViews:

final BvvStackSource< ? > sourceGauss1 = BvvFunctions.show( VolatileViews.wrapAsVolatile( gauss1 ),
		"gauss1", Bvv.options().sourceTransform( transform ).addTo( sourceRed ) );

and render immediately while missing tiles are requested and uploaded to the GPU cache when they become available.
On the right is the absolute difference between the two Gauss-smoothed versions (also lazily computed).
(If you run the example, note that the smoothing is slowed down artificially, so that you can see how the tiles come in…)
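The lazy smoothing part can be sketched like this (a sketch of the pattern, not the actual Example06 code; `img` is assumed to be an existing UnsignedShortType image). The resulting image is then wrapped with VolatileViews.wrapAsVolatile and shown as in the snippet above:

```java
// A CellLoader that lazily fills each requested cell with a
// Gauss-smoothed version of img. Each cell is a view positioned at its
// min within the full image, so smoothing into it "just works".
final CellLoader< UnsignedShortType > loader =
		cell -> Gauss3.gauss( 2.0, Views.extendZero( img ), cell );

// A read-only, lazily evaluated cached image backed by that loader.
final CachedCellImg< UnsignedShortType, ? > gauss1 =
		new ReadOnlyCachedCellImgFactory().create(
				Intervals.dimensionsAsLongArray( img ),
				new UnsignedShortType(),
				loader,
				ReadOnlyCachedCellImgOptions.options().cellDimensions( 32 ) );
```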

Example07 shows a cube in a head :slight_smile:

One thing to note: once your RAI has been uploaded to a texture, changes to the RAI will not be reflected, obviously.
You need to explicitly trigger re-upload using the BvvStackSource.invalidate() method.
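For example (a sketch; `img` stands for any already-supported image):

```java
// Show the volume; the pixel data is uploaded to a GPU texture.
final BvvStackSource< UnsignedShortType > source =
		BvvFunctions.show( img, "volume" );

// ... later, modify pixels of img, e.g. clear a sub-region ...
Views.interval( img, Intervals.createMinSize( 0, 0, 0, 10, 10, 10 ) )
		.forEach( t -> t.set( 0 ) );

// Changes are not picked up automatically; trigger re-upload explicitly:
source.invalidate();
```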

I’m currently having problems releasing Maven artifacts; once that is sorted out, I’ll make a release so that you can easily integrate BvvFunctions.

Have fun,


This is insanely important… maybe a small step for Tobi, but certainly a BIG step for all of us!!!
I’m sooo happy!


Dang… I’m running into problems. :confused:

Clone + Project import + Open ‘’ + Run ==> PKIX path building failed: unable to find valid certification path to requested target

Exception in thread "main" java.lang.NullPointerException
at net.imglib2.img.ImagePlusAdapter.wrapShort(
at net.imglib2.img.display.imagej.ImageJFunctions.wrapShort(
at bvv.examples.Example01.main(

The loaded image imp seems to be null.

Easy fixing attempt: download the sample image, save it on my desktop, and open it via…
final ImagePlus imp = IJ.openImage( "/Users/jug/Desktop/head.tif" );

Now imp is not null, but I run into another Exception:

Exception in thread "main" java.lang.NullPointerException
	at bdv.util.AxisOrder.getAxisOrder(
	at bvv.examples.Example01.main(

Seems that this time the variable space in AxisOrder::getAxisOrder(...) is null.

By no means do I want to rule out that I’m doing something stupid… so if you see what it is, please let me know.


Looks like some secure connection problem? Could you replace https in the image URL with http and see whether that works? Very strange, though. What is your Java version?

space is img. That means that ImageJFunctions.wrapShort( imp ) went wrong. Could you verify that your "/Users/jug/Desktop/head.tif" is indeed a 16-bit image?
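For reference, the bit depth can be checked before wrapping (a sketch, reusing the path from above):

```java
final ImagePlus imp = IJ.openImage( "/Users/jug/Desktop/head.tif" );
// ImageJFunctions.wrapShort only works for 16-bit (GRAY16) images.
if ( imp.getType() != ImagePlus.GRAY16 )
	System.err.println( "expected a 16-bit image, got type " + imp.getType() );
```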

And, thanks for trying it out, by the way!

Ha! That surely solved it!

  1. Replacing https by http does the trick. I ran this all with Java 1.8.0_25.
  2. Resaving the downloaded T1_head.tif as a 16-bit image also fixes my attempted fix. That was actually quite stupid and yes, I feel slightly embarrassed not to have seen that right away… :wink:

Thanks for the speedy and spot-on help. Now I am in danger of procrastinating by rotating beautifully rendered volumes around on my screen for hours on end… :smiley:


PS (for some later point in time, e.g. next week): could you share some of the magic keyboard shortcuts I see you use so proficiently? Plus: is it possible to change the near and far clipping planes, so that the volume does not get cut off when I zoom out, and more of it gets displayed in front of the ‘on-screen’ plane?


It’s all BigDataViewer shortcuts, so almost everything explained on will work. In particular, “S” for the brightness/color dialog and “F6” for the visibility dialog should be helpful.

Yes, have a look at BvvOptions. In particular these options:

And lots of other rendering-related and unrelated stuff can be tweaked there. Most of it even javadoc-ed :slight_smile:
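Regarding the clipping-plane question specifically, something along these lines should work (values are arbitrary; check the BvvOptions javadoc for the exact semantics of each parameter):

```java
final BvvSource source = BvvFunctions.show( img, "volume", Bvv.options()
		.dCam( 2000 )         // distance from camera to the z=0 plane
		.dClipNear( 1000 )    // visible distance in front of the z=0 plane
		.dClipFar( 15000 ) ); // visible distance behind the z=0 plane
```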

That’s probably too old for the https certificates to work.


wow, that is really big!
Can we use any mesh? Like, 3D data -> marching cubes -> display?

Not conveniently, no. You can run arbitrary OpenGL code using JOGL, like it is done here:

Just render anything and, if you leave the depth buffer enabled, it will correctly blend with the volumes. However, you have to do everything yourself. There is no plan to add support for meshes etc., except maybe something extremely minimal (e.g., paint spheres and lines, but no lighting or anything fancy).

For rendering meshes, you should look at SciView. As mentioned above, the BigVolumeViewer volume rendering pipeline should be integrated there very soon.

An alternative that I would find very convenient would be for scenery to expose some API that you could just call to render into the current framebuffer, so that you could simply plug it into viewer.setRenderScene(). If someone wants to look into that, that would be cool :slight_smile:


Hi @tpietzsch,

Awesome! Worked very nicely for me! Just to make sure I understand the scope: “volume renderings” like shown below are currently not possible with BVV, right?


No, not currently possible. It should be easy to add, but time … :slight_smile:


Really? That would be sooooooooo amaaaaazing :slight_smile:
But I understand your time issue all too well, same here…

Love playing around with this! There are two things I am missing which exist for BDV:

  • A panel which I can instantiate without data and later add sources:
    BdvHandlePanel( final Frame dialogOwner, final BdvOptions options )
  • An option for how multiple source values are accumulated:

Let me know if / how I can help create this or if there are reasons why this does not work as in BDV.


This should be relatively easy to add. The Bvv... classes are mostly copy&pasted from the corresponding Bdv... classes in vistools. If you want to give it a try, I’m happy to help!
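For reference, the BDV-side pattern that a Bvv equivalent would mirror (BdvHandlePanel exists in bdv-vistools today; the Bvv counterpart does not exist yet and would be analogous):

```java
// Create an empty BDV panel with no sources yet, embedded in a frame.
final JFrame frame = new JFrame( "viewer" );
final BdvHandlePanel handle = new BdvHandlePanel( frame, Bdv.options() );
frame.add( handle.getViewerPanel(), BorderLayout.CENTER );
frame.setPreferredSize( new Dimension( 800, 600 ) );
frame.pack();
frame.setVisible( true );

// Later: add sources to the (initially empty) panel.
BdvFunctions.show( img, "img", Bdv.options().addTo( handle ) );
```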

This is decidedly harder, because the blending is done in generated GLSL code.
There are two places where blending is happening:

  • blending pixels from overlapping volumes to get the combined value at a particular 3D coordinate, and
  • blending pixels along the ray cast through each (2D) pixel to assemble the combined values along the ray into the final screen color.

The GLSL is assembled from template snippets because it needs to be adaptable to a changing number and type of sources. Almost nothing of this machinery is exposed through an API for configuration.

It should definitely be possible, but this is a much bigger project than your first issue. My vague plan was that you would have automatically discoverable (through annotations) companion classes for, e.g., a Converter<T, ARGBType> implementation that works in BDV. The companion class would know how to build an equivalent converter in GLSL, which values to harvest from a given instance of the converter, how to transfer them to the corresponding snippet variable instances, etc. Configuring source accumulation is a similar problem. So, ideally, this would be about figuring out a general solution to making the shader generation extensible. Let’s talk about it…
