Opening hdf5 file from ilastik in Fiji

Dear All,

We are using Fiji to open ilastik probability maps, and are having trouble when the h5 files are “too” large.

Indeed, using the HDF5 plugin we can open 1 GB and 6 GB files, but with 40 GB files we are blocked by a “Length is too large” exception.
In Fiji our maximum memory is set to 120,000 MB (it’s a big computer).

Does anyone know if there is an option during the ilastik export that we have to select in order to circumvent this issue?

Many thanks for your help,



Unfortunately I don’t have an answer; just something to suggest checking. I know it might sound stupid to ask, but are you sure you have the memory available? Just because Fiji is permitted to use 120 GB doesn’t mean 120 GB are available for use.
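One quick way to check is to ask the JVM directly, either from the Fiji Script Editor or as a standalone program. A minimal sketch in plain Java (the class name is just for illustration):

```java
public class MemCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is the -Xmx ceiling the JVM is allowed to grow to,
        // not a guarantee that the OS can actually provide that much.
        System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free heap (MB):  " + rt.freeMemory() / (1024 * 1024));
    }
}
```

If the max heap printed here is much lower than what you set in the Fiji options, the setting is not taking effect.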

This is one of the things we built n5-hdf5 for. Since it’s not in the public repos, you will have to do some compiling, but it’s simple enough:

You need Maven and either OpenJDK 8 or 10, or Oracle Java 8 or 10. Then check out the following repositories: n5, n5-hdf5, n5-imglib2, and maybe n5-ij, but that one would not help you here because the data is too large for an in-memory copy. Then build each of them into your local Fiji installation:

mvn clean install
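A sketch of that build loop, assuming the three repositories were cloned side by side (the paths are placeholders, and the optional `scijava.app.directory` property, provided by the SciJava Maven setup, copies the built jars into a Fiji installation if the project’s pom supports it):

```shell
# assumes n5, n5-hdf5, n5-imglib2 were cloned into the current directory
for repo in n5 n5-hdf5 n5-imglib2; do
  (cd "$repo" && mvn clean install)
done
# optionally, to copy the jars straight into Fiji (if supported by the pom):
# mvn clean install -Dscijava.app.directory=/path/to/Fiji.app
```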

This makes the API available in Fiji. In Fiji itself, open the script editor, select the fantastic Beanshell language, and run:

import net.imglib2.img.display.imagej.*;
import org.janelia.saalfeldlab.n5.*;
import org.janelia.saalfeldlab.n5.hdf5.*;
import org.janelia.saalfeldlab.n5.imglib2.*;
n5 = new N5HDF5Reader("ilastik-predictions.hdf5", new int[]{16,16,16});
img = N5Utils.open(n5, "/dataset");;

Maybe we should go ahead and ship this stuff by default…


Thanks for the message! I’m a colleague of Romain and I tried your solution. We made some progress but are stuck now.

First, a few notes:

  • when installing n5-ij, ImageJ is not able to start: I see a window that disappears almost instantly. I’m on Windows 10 64-bit and unfortunately no error message pops up, even when launching from the command line. So I did not install this module.
  • when installing n5-hdf5, the tests fail (N5HDF5Test>AbstractN5Test.testCreateGroup:124 Group does not exist). I skipped them using the -DskipTests option.

Then we tried to run your script (obviously replacing “ilastik-predictions.hdf5”), but we get this error message:

Started opentest.bsh at Mon Sep 10 11:08:01 CEST 2018
Sourced file: inline evaluation of: ``import net.imglib2.img.display.imagej.*; import org.janelia.saalfeldlab.n5.*; im . . . '' : Method Invocation : at Line: 8 : in file: inline evaluation of: ``import net.imglib2.img.display.imagej.*; import org.janelia.saalfeldlab.n5.*; im . . . '' : N5Utils .open ( n5 , "/dataset" ) 

Target exception: ncsa.hdf.hdf5lib.exceptions.HDF5SymbolTableException: Symbol table:Object not found ["..\..\src\H5Gloc.c line 385 in H5G_loc_find_cb(): object 'dataset' doesn't exist

Maybe that’s something really obvious that I’m missing? What does the dataset folder refer to? Also, what’s the meaning of new int[]{16,16,16}?


I’ve updated Fiji and changed “/dataset” to “/exported_data”, and it looks like it works! Is there a way to specify the axis order (xzyc, xyzc, etc.)?

I am glad you made it. Some plumbing required, as I predicted ;).
“/dataset” was obviously a placeholder for the actual dataset; I assumed you would figure that one out.

XYZ or whatever order:

Use ImgLib2 views to apply whatever transformation you have in mind. Let’s assume that “/exported_data” is stored in xyzc order and you want to map it to ImageJ friendly xycz:

import net.imglib2.view.*;
import net.imglib2.img.display.imagej.*;
import org.janelia.saalfeldlab.n5.*;
import org.janelia.saalfeldlab.n5.hdf5.*;
import org.janelia.saalfeldlab.n5.imglib2.*;
n5 = new N5HDF5Reader("ilastik-predictions.hdf5", new int[]{16,16,16});
img = N5Utils.open(n5, "/exported_data");
xycz = Views.permute(img, 2, 3);;

Modify as needed.


Perfect! That works ‘almost’ perfectly…

Just the display is outrageously slow when the image is oriented in the correct direction (i.e. not as a 3500x3 pixel image). That’s a bit weird, because a 6 GB dataset displays correctly while the 40 GB dataset is ‘unworkable’. The display speed difference does not seem to scale with the number of pixels shown.

Thanks to your link, I tried subsampling heavily using:

subs = Views.subsample(img,1,steps,steps,steps);

But there’s no apparent speed-up, maybe because read access is inefficient when subsampling.

Anyway, it’s great because now at least we can open the image. What we plan to do next is to open the image as a virtual stack as before, and then duplicate the image to get it fully into RAM.

We have maybe just one tiny last question: is there a way to open the image in RAM directly? I had a look at ImageJFunctions, but I couldn’t find a fitting command. I tried ImageJFunctions.copyToImagePlus but it returned an error: No signature of method: static net.imglib2.img.display.imagej.ImageJFunctions.copyToImagePlus() is applicable for argument types: (net.imglib2.view.IntervalView).

In general, if you have any hints or advice on speeding up the opening and display of this sort of image, we’d be happy to hear them!

Thanks a lot for the support @axtimwalde!

It will probably help to set the default block size to something more reasonable and appropriate for your image than {16,16,16}. Maybe {64,64,64,-1} if xyzc is your incoming order? I know too little about your data.
The displayed stack is already a virtual stack; shift+d attempts to copy it (or a ROI) into an in-memory image. But you said it’s too big for RAM?
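The shift+d duplication can also be done from a script. A minimal Beanshell sketch, assuming `img` is the RandomAccessibleInterval returned by N5Utils.open and that the data actually fits in RAM:

```
import net.imglib2.img.display.imagej.*;

imp = ImageJFunctions.wrap(img, "prediction"); // virtual ImagePlus view of img
dup = imp.duplicate();                         // materializes all pixels in RAM;
```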

For more rapid display, you should look at N5Utils.openVolatile in combination with BDV vistools, as used e.g. here:

Or if you only care about launching an interactive viewer to look at stuff from the terminal, check out n5-utils.

Oh, maybe set your z block size to 1 and make the xy size bigger, e.g. {256,256,1,-1} for a dataset in xyzc order; that could also help speed things up. It’s currently loading 16 sections at once, which takes some time for a 3500 px wide image.
Subsampling will not change the speed, because you still have to load everything, which is the time-consuming operation here.

BTW, I mean the block size that the N5-HDF5 backend is using to load the data, not the block-size in the HDF5 file :).

n5 = new N5HDF5Reader("ilastik-predictions.hdf5", new int[]{256,256,1,-1});

Could we make any of this easier by changes on the ilastik side? Different chunk size or some other hdf5 option?


Hi @ilastik_team,

I’d be interested in that; my original post was about finding an ilastik solution :wink:

So far I found out how to use “transpose to axis order” to make the hdf5 compatible with the HDF5 plugin and its macro command.
I don’t know (yet) how to change the chunk size from the ilastik window, nor where to find hdf5 options; see the ilastik “Image export options” window below

I realize (just now, sorry) that there is an ilastik hdf5 reader available

It’s already in my list of Fiji update sites, but the import hdf5 command is not available.


For the ilastik hdf5 reader, install the ilastik plugin; there will be a sub-menu where you can select whether you want to import, export, predict, etc.

Oops :sweat_smile: I was following the documentation, but it seems it needs to be updated.

You can submit pull requests to

The file to update is here:


Done, it’s been out of date for too long :wink:


Thank you @k-dominik for updating the documentation!

So I managed to open even the larger files, BUT in order to automate the process I have another issue.
The macro recorder only records:
run("Import HDF5", "hdf5filename="+path+"\\cr170223_A_idC_bigcrop_lnmc_Probs.h5");
The intermediate window that asks for the dimension order is not recorded.

It would be nice if we could specify the order of the dimensions in the same command so we do not have to manually enter it.



This sounds like a good idea @wolny, do you think this can be done easily?


Currently, the plugin doesn’t have a proper separation of concerns, i.e. it mixes functionality with user interface by showing a dialog in a method called from its run() method:

Ideally, you’d have a Command that defines all possible input @Parameters (including datasetPath and dimensionOrder), so the SciJava framework can take care of harvesting these parameters from the user input (by showing a dialog, or, when headless, getting them e.g. from the command line or macro command).

@romainGuiet when using any other language than IJ1 macros (e.g. Groovy or Python), you can call into the API of Hdf5DataSetReader in the same way as this plugin currently does:


Yeah, it was an easy fix. I’ve included it in the latest release of the plugin (v1.3.0). Also see the README for sample usage of the plugin within a macro:
@romainGuiet please update the plugin in your Fiji installation and follow examples in the README in order to use the plugin’s commands from within Fiji macros.
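For reference, a fully specified macro call might look like the following; the parameter keys (select, datasetname, axisorder) are my reading of the plugin’s README and should be treated as assumptions to verify against the version you have installed:

```
// hypothetical parameter keys -- check the plugin README for your version
run("Import HDF5", "select=" + path + "\\cr170223_A_idC_bigcrop_lnmc_Probs.h5 datasetname=/exported_data axisorder=tzyxc");
```

With all parameters given in the options string, no intermediate dialog should appear, so the call can run unattended in a batch macro.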