New YAPiC Release (1.2): A Simple Command Line Tool For Deep Learning Based Image Segmentation

We are happy to announce the new YAPiC version 1.2! Check out the YAPiC website for tutorials and installation instructions.

With YAPiC you can train your own customized neural network (U-Net) to detect a structure of your choice through a simple Python-based command line interface. You can label your training data with Ilastik.

$ yapic train unet_2d "path/to/my/images/*.tif" path/to/my/ilastik_labels.ilp

$ yapic predict my_trained_model.h5 path/to/my/images path/to/results/

  • Installation with pip install yapic
  • Support for sparse labels; training data can be imported from Ilastik projects.
  • NEW: Trained models can be used in ImageJ/Fiji
  • To get started quickly, check out the new tutorials at the YAPiC website and train your first model.

Release Notes

Trained classifiers can be applied in ImageJ, using DeepImageJ Plugin

With the new release we aim to build a direct bridge to the ImageJ/Fiji ecosystem. Models trained with YAPiC can now be converted into DeepImageJ bundled models with the new deploy command:

$ yapic deploy my_trained_model.h5 path/to/example_image.tif path/to/

All necessary metadata, example images and conversion scripts are included in the bundled model, which can be opened directly in DeepImageJ and is ready to use.

TensorFlow and Keras dependencies were removed

In previous YAPiC versions, TensorFlow 2.1 was installed automatically. We removed the TensorFlow dependency so that users can install the TensorFlow version that best fits their GPU hardware and CUDA driver setup. Keras functions are now imported from the TensorFlow backend, preventing potential version conflicts between Keras and TensorFlow.
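Since TensorFlow is no longer pinned, it can be useful to check which build is present before training. A minimal sketch using only the standard library (the helper name is ours, not part of YAPiC):

```python
from importlib import metadata

def installed_tensorflow_version():
    """Return the installed TensorFlow version string, or None if absent."""
    try:
        return metadata.version("tensorflow")
    except metadata.PackageNotFoundError:
        return None

# Prints the version, or a hint to install a build matching your setup.
version = installed_tensorflow_version()
print(version or "TensorFlow not installed; pick a build matching your CUDA/driver setup")
```

If no version is found, install the TensorFlow release that matches your GPU and CUDA driver before running yapic train.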

All release notes here.

Happy model training!

All the best


Thanks for the update!

Does YAPiC work with the manually labeled sparse data directly, or does it run Ilastik in the background to generate a dense segmentation from the sparse labels and then train the U-Net on the dense labels (generated by Ilastik)?

Also, can it work on 3D data?


Dear @Jules

YAPiC just reads the sparse labels from the Ilastik file; Ilastik does not run in the background. The U-Net is trained directly with the sparse labels. To get good training results, the loss function is adapted to work with sparse labels. Moreover, the logic for fetching randomized training data makes sure that all label types are present in each training iteration.
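The idea of a sparse-label loss can be illustrated with a small toy example. This is a simplified sketch, not YAPiC's actual implementation; the function name and the label convention (0 = unlabeled, 1 = background, 2 = foreground) are assumptions for illustration:

```python
import numpy as np

def masked_binary_cross_entropy(pred, labels):
    """Toy sparse-label loss.

    Only labeled pixels contribute to the loss; unlabeled pixels (label 0)
    are masked out, so the network is never penalized for its output there.
    """
    mask = labels > 0                      # True only where the user drew a label
    target = (labels == 2).astype(float)   # foreground probability target
    p = np.clip(pred, 1e-7, 1 - 1e-7)      # avoid log(0)
    ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return (ce * mask).sum() / max(mask.sum(), 1)
```

Changing the prediction on an unlabeled pixel leaves this loss unchanged, which is what makes training on sparse Ilastik labels feasible.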

With these adjustments we obtain robust training with sparsely labeled data. From our experience, a single scientist can collect training data within a few hours and train a reasonable pixel classifier (of course, the amount of labels needed depends strongly on the classification task).

YAPiC works on multichannel 3D data. It reads hyperstacks generated with ImageJ. YAPiC does NOT support time series data.


Dear Christoph,

Thank you for your answer! How does inference time on 3D data compare with Ilastik? I wonder how much time it would take to process a 1000x1000x1000 volume on an RTX 3090. Did you run any benchmark tests?

Thanks for your help

We did not really run benchmark tests, but from my experience I would guess that a 1000x1000x1000 volume is predicted in about 5 minutes. We use an Nvidia Titan V GPU.