Which HDF5 export method for MIB?

Hello,

I am a new MIB user, discovering the features little by little.
I am currently working on a very large SBF-SEM image stack (more than 2000 z-slices) and would like to open it in virtual stacking mode because I don't have enough RAM.

I have already managed to open the virtual stack in ImageJ and convert it to HDF5 using the plugin BigDataViewer > Export Current Image as XML/HDF5. But when I switch to virtual stacking mode in MIB and try to import the previously created file, a table opens and asks me to choose a dataset, and I have never managed to open the stack. I think I am exporting the stack to HDF5 incorrectly.

My question is the following:
Which method should I use to export my stack correctly?

Thank you in advance for your answers.

Lucien

Hi Lucien,
thank you for your interest, and welcome to the forum!
Unfortunately, you won't be able to do efficient segmentation in the virtual stacking mode. That mode is mostly for viewing the data, and most segmentation tools are not available there.
The best solution is to chop the dataset into several subsets (MIB->Menu->Chopped images->Export) or crop out the areas of interest (Menu->Dataset->Crop) and work with them one by one, assembling the results into one common stack or visualizing them as separate entities in a common view (for example, we do that in Amira).
Regarding HDF5, MIB can open two types of HDF5 datasets: from Ilastik and from BigDataViewer. In the virtual mode, the BigDataViewer XML file can be opened when the Bio checkbox is ticked.

The following window allows you to select one of the binned sets inside the HDF5 container.

Note that the BigDataViewer .h5 file is in 16-bit format, which is not optimal; as I understand it, unsigned 8-bit is not possible in Java, so it may be better to generate the HDF5 with Ilastik.
It is also better to convert the images to uint8, to work faster and with lower memory requirements.
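As a rough illustration of the 16-bit to uint8 conversion mentioned above, here is a minimal NumPy sketch (the function name and the min/max defaults are my own choices, not part of MIB or Ilastik):

```python
import numpy as np

def to_uint8(img16, lo=None, hi=None):
    # Map the intensity range [lo, hi] to 0..255; by default use the
    # slice's own min/max. A fixed lo/hi taken from the whole stack
    # avoids per-slice flicker, but that choice is up to you.
    data = img16.astype(np.float64)
    lo = data.min() if lo is None else lo
    hi = data.max() if hi is None else hi
    if hi <= lo:
        return np.zeros(data.shape, dtype=np.uint8)
    scaled = (data - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Applied slice by slice, this halves the memory footprint of a 16-bit stack.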

Ilya

Thank you very much for your prompt response!

I watched your video on the subject. Indeed, you mention that segmentation tools and image filters are not available in virtual stacking mode.
I currently want to apply the contrast normalization function to the whole stack, but presumably this feature is not available in virtual stacking mode either.

Regarding the export, I also tried to convert the stack with the Ilastik plugin, but without success: I always got an error message about the length (the set I'm trying to process weighs 120 GB).

I just tried again to open the BigDataViewer XML file with the Bio checkbox ticked. I got the selection window, but as soon as I try to open a set I get this error message:

Index exceeds the number of array elements (5).
Error in mibUpdatePixSizeAndResolution (line 94)
Error in mibLoadImages (line 154)
Error in mibController/mibFilesListbox_Callback (line 67)
Error in mibGUI>mibFilesListbox_Callback (line 673)
Error in gui_mainfcn (line 95)
Error in mibGUI (line 42)
Error in matlab.graphics.internal.figfile.FigFile/read>@(hObject,eventdata)mibGUI('mibFilesListbox_Callback',hObject,eventdata,guidata(hObject))
Error using pause (line 20)
Error while evaluating UIControl Callback.

For the segmentation I plan, as you advised, to fragment the dataset.

Thank you,

Lucien

As you probably have your original images as plain 2D slices, you can still do contrast normalization. In the standard mode the contrast is normalized to the average values of the whole stack, but you can open several representative slices and run contrast normalization on them. After the procedure you will see the calculated mean and std values, and you can then use those values in the "Manual" mode. This will work faster.
On the practical side, you can load your stack portion by portion, do contrast normalization, convert to 8-bit (if needed) and resave. The other solution is to process file by file using the batch processing mode (Menu->File->Batch processing); there you can create a FILE LOOP that will process all files one after another.
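The "Manual" mode idea above, reusing a mean/std computed once on representative slices, can be sketched like this (this is my own illustration of the arithmetic, not MIB code):

```python
import numpy as np

def normalize_slice(img, target_mean, target_std):
    # Shift and scale one slice so its mean and std match the target
    # values computed beforehand on a few representative slices.
    # Using fixed targets for every slice keeps the whole stack
    # consistent without touching all 2000+ slices at once.
    data = img.astype(np.float64)
    std = data.std()
    if std == 0:
        return np.full(data.shape, float(target_mean))
    return (data - data.mean()) / std * target_std + target_mean
```

Each slice can then be clipped and cast back to uint8 before resaving.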

I just tried again to open the BigDataViewer XML file with the Bio checkbox ticked. I got the selection window, but as soon as I try to open a set I get this error message:
Index exceeds the number of array elements (5).
Error in mibUpdatePixSizeAndResolution (line 94)

When MIB opens images, it tries to read the voxel size, and it might be that for this particular dataset the voxels were not defined (if that is even possible) or have somewhat wrong units.
What information do you have in the XML file under the size tag? For example:

<voxelSize>
          <unit>micron</unit>
          <size>0.014019273399014778 0.014019322799097067 0.030000000000000002</size>
</voxelSize>

Thank you very much for your contrast normalization solution. I will try this!

After checking, my dataset has a corrupted image. I will try again to export the stack to HDF5 without the problematic image.

Here is the information in the xml file:

        <size>2048 1365 2447</size>
        <voxelSize>
          <unit>pixel</unit>
          <size>1.0 1.0 1.0</size>
        </voxelSize>

Try first with a small dataset to check the workflow.

This means the dataset is not calibrated; in my opinion it is better to keep the dataset with correct voxel sizes, as this will cause less confusion in the future.
I recommend changing <size>1.0 1.0 1.0</size> to <size>x y z</size>, where x, y, z are the resolution of the dataset, and changing pixel to micron. This way the dataset will be calibrated and you will get correct quantification whenever you need it.
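If editing the XML by hand is error-prone for many files, the voxelSize blocks can be patched with a short script. A minimal sketch using Python's standard library (the function name is mine; the unit and size values you pass must be your dataset's real voxel dimensions):

```python
import xml.etree.ElementTree as ET

def calibrate_voxels(xml_in, xml_out, unit, sizes):
    # Rewrite every <voxelSize> block in a BigDataViewer XML so the
    # dataset is calibrated, e.g. unit="micron", sizes=(0.014, 0.014, 0.03).
    tree = ET.parse(xml_in)
    for vs in tree.getroot().iter('voxelSize'):
        vs.find('unit').text = unit
        vs.find('size').text = ' '.join(str(s) for s in sizes)
    tree.write(xml_out)
```

Keep a copy of the original XML before overwriting it, since the HDF5 container itself is untouched.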