DeepMIB - user-friendly tool for training of deep learning network for biological image segmentation

Dear colleagues,
I am glad to announce a new release of MIB (version 2.70) that features DeepMIB, a new tool for automatic image segmentation using convolutional neural networks.
With DeepMIB you can train 2D (U-Net, SegNet) and 3D (U-Nets for isotropic and anisotropic voxels) CNNs for segmentation of light or electron microscopy datasets.
To use it, download the most recent version of MIB for Matlab or the standalone version (if you do not have Matlab) and start it from Menu->Tools->Deep learning segmentation.

2D/3D EM/LM examples:

  1. 2D EM dataset: Segmentation of membranes (Serial section Transmission Electron Microscopy dataset of the Drosophila first instar larva ventral nerve cord)


    Movie:
    https://www.biorxiv.org/content/biorxiv/early/2020/07/14/2020.07.13.200105/DC2/embed/media-2.mp4?download=true

  2. 2D LM dataset: Segmentation of nuclei (blue), their boundaries (yellow) and interfaces between adjacent nuclei (red) for random images from a high-throughput screen on human cultured osteosarcoma U2OS cells (BBBC022 dataset, Broad BioImage Benchmark Collection)


    Movie:
    https://www.biorxiv.org/content/biorxiv/early/2020/07/14/2020.07.13.200105/DC3/embed/media-3.mp4?download=true

  3. 3D EM dataset: Segmentation of mitochondria from the full focused ion beam scanning electron microscopy dataset of the CA1 hippocampus region.


    Movie:
    https://www.biorxiv.org/content/biorxiv/early/2020/07/14/2020.07.13.200105/DC4/embed/media-4.mp4?download=true

  4. 3D LM dataset: Segmentation of inner hair cells and ribbon synapses from mouse inner ear cochlea


    Movie:
    https://www.biorxiv.org/content/biorxiv/early/2020/07/14/2020.07.13.200105/DC5/embed/media-5.mp4?download=true

A manuscript describing DeepMIB is available on bioRxiv. It is supplemented with the datasets, configs and networks used to generate the examples above.

How to use

Release notes

  • Added DeepMIB for training and prediction of datasets using deep convolutional networks ( Menu->Tools->Deep learning segmentation )
  • Added 2D Elastic Distortion filter ( Menu->Image->Image Filters )
  • Added resizing of the Image Arithmetics window
  • Added selection of a seed for random generator for Rename and Shuffle tool
  • Fixed issues with importing of chopped (cropped) datasets

Best regards,
Ilya


Great job! This was something I had been looking for for a long time! I have a question: how does the GPU affect the speed of segmentation? Soon I will have a workstation with 2x RTX 2080Ti in SLI mode; does MIB use this feature? What about RT and Tensor cores? Do you plan to use them in the software to speed up segmentation?

Once again great job!

Karol

Hi Karol,
thank you!

how does the GPU affect the speed of segmentation

You can do training and prediction without a GPU, but it is slow and can only be "recommended" to someone who knows what they are doing. For an inexperienced user it will be a significant waste of time, as they will need to play around with settings and parameters; waiting for hours only to see that training failed due to a wrong setting is not worth it. It is best to invest in a decent GPU.
Here is a comparison of CPU vs GPU, although for a completely different task:

Soon I will have a workstation with 2x RTX 2080Ti in SLI mode; does MIB use this feature

The released version selects the first GPU available on the system, but the current beta already allows choosing which GPU to use. SLI mode should also work (there are parameters to enable it), but it is not implemented yet, as I can't really test it. Let me know when you get the workstation and we can enable that.
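For reference, GPU selection and multi-GPU training in Matlab's Deep Learning Toolbox can be sketched as follows. These are standard Matlab calls shown as an illustration, not necessarily DeepMIB's internal code:

```matlab
% Query the available CUDA-capable GPUs and select one by index
nGPUs = gpuDeviceCount;   % number of detected GPUs
gpuDevice(1);             % make GPU #1 the current compute device

% To train on all local GPUs at once, request the 'multi-gpu'
% execution environment in the training options
opts = trainingOptions('adam', ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'MaxEpochs', 50);
```

The 'multi-gpu' mode uses data parallelism across local GPUs and requires the Parallel Computing Toolbox.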

What about RT and Tensor cores?

I can't answer that yet; this is my first experience coding deep learning :] I am using whatever Matlab offers, but according to an answer on the community page (https://se.mathworks.com/matlabcentral/answers/468516-tensor-cores-and-deep-learning), tensor cores are not yet supported.

do you plan to use them in the software to speed up segmentation?

It is a work in progress; there are still a few other things to add, e.g. clusters, transfer learning, import of other networks. We will see… :wink:

Best regards,
Ilya


Thank you for your answers :slight_smile: If you need somebody to help you test new features, please let me know, I will be glad to help!

Best
Karol

@KarolM,
Great!
It will probably take only a few minutes to code that, but I just can't test it. So let me know when you get the workstation.
Ilya


Hello Ilya,

that is a very welcome addition indeed, thank you for your continuous effort.
Unfortunately, after updating I have a problem starting MIB:

MIB installation path: C:\Program Files\MIB\application
MIB: parameters file: C:\Users\lbreitsprech\Matlab\mib.mat
Reference to non-existent field 'deep'.

Error in mibController/getDefaultParameters (line 355)

Error in mibController (line 539)

Error in mib (line 91)

Error in mib_deploy (line 5)

MATLAB:nonExistentField

Any ideas where I messed up? I tried reinstalling to different paths, but with the same result.
I am using the standalone version.

Best regards,

Leo

Hi Leo,
thank you for your kind words!
Try deleting C:\Users\lbreitsprech\Matlab\mib.mat
and restarting MIB. Unfortunately, it will remove all preferences, sorry for that.
Ilya


Hi Ilya, great work - very exciting!
I’m doing segmentation analyses in 2D only, quantifying different classes of objects. Previously I have used pixel classifiers, but the results were not great. I have a few basic questions:

  • Is deepMIB suitable for whole slide images (WSI), 2-4Gb image size?
  • Do I need to install and learn Matlab in order to use DeepMIB?
  • The Watershed/Graphcut segmentation tool is amazing. Is this tool used in combination with DeepMIB?
  • Does DeepMIB support .ndpi format?

Best
Pentala

Hi Pentala,
thank you for your nice comments!

Is deepMIB suitable for whole slide images (WSI), 2-4Gb image size?

In general yes, but it requires some adaptation, as MIB has not yet been optimized for working with large rasters; you will see slow updates when adding materials to models.
A couple of suggestions to make the process faster:

  • enable the fast panning mode (the toolbar button with a hand icon and the letter F) for efficient panning of the images
  • use the block mode (the toolbar button with a square rectangle, next to the ROI button with the letter R). In this mode, operations are applied only to the displayed portion of the image, which makes the process faster. If you need to apply operations to the full image, just uncheck this button.
  • you may need to disable the undo system (Menu->Preferences->Enable undo history). In general, the undo system is optimized to store only the modified areas, but not in all cases, and in some situations you may experience a delay (due to making a backup) before an operation.

Do I need to install and learn Matlab in order to use DeepMIB?

No, it does not require Matlab. The only limitation is that the 3D anisotropic U-Net with 'valid' padding is not available; only 'same' padding can be used in this mode. Since you are interested in the 2D mode, it won't affect you in any way.

The Watershed/Graphcut segmentation tool is amazing. Is this tool used in combination with DeepMIB?

Graphcut is good for generating ground-truth data for DeepMIB or for performing semi-automatic segmentation. In that sense, they can be combined as two sequential segmentation tools.

Does DeepMIB support .ndpi format?

I have not used this format myself, but you should be able to read these files using the Bio-Formats reader. Please check the ‘Bio’ checkbox in the ‘Directory contents’ panel.

Best regards,
Ilya

Thank you for this excellent and exciting software! I have trained a 2D segmentation model on an online dataset and would like to run a prediction on tiles exported from another WSI image viewer (QuPath), then import the resulting prediction masks back into QuPath as files with the same file type, name and pixel dimensions. Is there any way to export the resulting prediction masks as, for example, *.tif files? I can only see *.mat files in the prediction folder under Results.

Hi @HenrikSP,
Thank you!

The mat-files are only temporary files into which the images are converted for the sake of simplicity. They are generated by pressing the Preprocess button and are used for training and prediction.
What you need are the files placed under the Results\PredictionImages\ResultsModels and Results\PredictionImages\ResultsScores subdirectories.

ResultsModels -> a directory with the final model of the prediction in the MIB *.model format. You will need to open these models in MIB (load the original images + load the generated model) and resave them in TIF format (Menu->Model->Save model as…).
ResultsScores -> a directory with the prediction scores, which can be used to threshold the materials of interest manually. These are stored in Amira Mesh format as multicolor images, where each color channel contains the scores for one material. They can also be loaded into MIB and converted to TIF (Menu->File->Save image as).
I do not use TIFs because a standard TIF can only have 1 or 3 color channels and is limited to ~3 Gb.
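As a minimal illustration of the manual thresholding step, once a score volume has been exported as an image, one material's channel can be thresholded in Matlab like this (the file names and the 50% cut-off here are hypothetical, chosen only for the example):

```matlab
% Hypothetical example: scores.tif holds per-material score channels
% (0-255); threshold material 2 at 50% confidence to get a binary mask
scores = imread('scores.tif');              % hypothetical file name
mask = uint8(scores(:, :, 2) > 127) * 255;  % channel 2 -> 0/255 mask
imwrite(mask, 'material2_mask.tif');        % mask for import elsewhere
```

The same thresholding can of course be done interactively in MIB via the menu commands above.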
I hope this clarifies the issue!

Ilya


Thank you so much! The YouTube tutorial for the 2D SegNet was very helpful. Is there a tutorial (text or video) available for the 2D U-Net (not segmentation, just classification of whole images)?

Hi @HenrikSP,

The YouTube tutorial for the 2D SegNet was very helpful. Is there a tutorial (text or video) available for the 2D U-Net (not segmentation, just classification of whole images)?

SegNet? Do you mean U-Net? We do not have tutorials about SegNet, as U-Net performed better in our segmentation tests.

just classification of whole images

I have not implemented it, as we are mostly interested in segmentation problems. Technically, the implementation is rather straightforward, but I did not have a scientific question where I could use it, and as a result no testing environment. I will think about it, as it makes a nice coding exercise :slight_smile:

Ilya


Thanks. My bad, I see you use U-Net in the tutorial. I can see that other networks like VGG, Inception, etc. are probably more suitable for pure image-classification tasks without segmentation. I'm not very good at this; I'm a pathologist :slight_smile: In case U-Net is not well suited for pure image classification, including some of those other pure classification networks as an option in future versions of DeepMIB might be useful and extend the user base.

@HenrikSP, I will think about this.
If you have a specific task in mind, could you send me a personal message with a description of the task?

I have been trying the software a bit and it looks very good. It is missing some settings, though, like the ability to balance classes. Is there any way to use something like ‘ClassWeights’ (https://se.mathworks.com/help/vision/ref/nnet.cnn.layer.pixelclassificationlayer.html) or other ways to balance classes in very imbalanced datasets (many histopathology datasets are quite imbalanced)?

Hi @HenrikSP,

It is missing some settings, though, like the ability to balance classes.

If you have rare classes, please use ‘dicePixelCustomClassificationLayer’, as it handles those cases quite well.
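For reference, the ‘ClassWeights’ approach from the linked Matlab page can be sketched with the standard median-frequency balancing recipe from the Matlab documentation. This is an illustration, not DeepMIB's internal code, and `pxds` is assumed to be a pixelLabelDatastore over your ground-truth labels:

```matlab
% Count how often each class occurs across the labeled images
tbl = countEachLabel(pxds);                        % pxds: pixelLabelDatastore
imageFreq = tbl.PixelCount ./ tbl.ImagePixelCount; % per-class pixel frequency
classWeights = median(imageFreq) ./ imageFreq;     % rarer class -> larger weight

% Pass the weights to the output layer of the segmentation network
pxLayer = pixelClassificationLayer( ...
    'Classes', tbl.Name, ...
    'ClassWeights', classWeights);
```

The weighted cross-entropy loss then penalizes mistakes on rare classes more heavily, which serves a similar purpose to the Dice-based layer mentioned above.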

Ilya

Thank you for the reply. Sorry about all the questions, but do you know whether Hamamatsu *.ndpi whole-slide imaging files can be read by DeepMIB? I can’t find the ndpi extension when selecting Bio (Bio-Formats), but I know Bio-Formats supports it (https://docs.openmicroscopy.org/bio-formats/6.1.1/formats/hamamatsu-ndpi.html).

If it is supported by Bio-Formats, you should be able to read it. If the format is not listed in the Filter dropdown, you can add it: right-click over the dropdown, choose Register extension, and add ndpi to the list.

After that it should also work with DeepMIB.
Ilya
