ZeroCostDL4Mic: an open platform to use Deep-Learning in Microscopy

Hi everyone,

Some exciting news on our side. We are pleased to announce that we released a major update of our ZeroCostDL4Mic platform. This release is accompanied by an update of our bioRxiv preprint.

ZeroCostDL4Mic allows the use of popular Deep Learning networks capable of carrying out tasks such as image segmentation and object detection (using U-Net, StarDist and YOLOv2), image denoising and restoration (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM) and image-to-image translation (using label-free prediction fnet, pix2pix and CycleGAN). With ZeroCostDL4Mic, researchers with little or no coding expertise can train (and re-train), validate, and use DL networks, through a browser and for free, thanks to the Google Colab engine the platform uses.

Latest update, in brief:

  • We released five new notebooks that train 3D U-Net, YOLOv2, Deep-STORM, CycleGAN and pix2pix.
  • We released the associated training datasets on Zenodo.
  • We updated the preprint and the manual.
  • We illustrate the notebooks’ applicability with examples of the results they can generate (6 new figures + 9 new videos).
  • We illustrate the power of transfer learning (see the SI of our preprint).
  • We illustrate the usefulness of data augmentation (see the SI of our preprint).

If you have any questions or comments, do not hesitate to get in touch with us! We regularly post updates about the developments we are implementing; to check them out, follow #ZeroCostDL4Mic on Twitter.

We hope you find ZeroCostDL4Mic useful for your work (or your hobbies!).

The ZeroCostDL4Mic team


Hi,

I’m trying to use pix2pix, though not in Colab but on a personal server with a Tesla P100 16 GB GPU.
I converted your Colab notebook to a .py file and I’m testing with the actin filaments dataset you provided.
My question is: is it normal for training to be this slow (almost 3 min per epoch), even with GPU processing?
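
In case it is useful, the conversion itself can be done with nbconvert; below is a minimal sketch using its Python API (the notebook filename is just an example):

```python
# Sketch: export a downloaded Colab notebook to a plain Python script with nbconvert
# (the notebook filename below is only an example).
from nbconvert import PythonExporter

exporter = PythonExporter()
# from_filename returns the exported source code plus a resources dict
source, _ = exporter.from_filename("pix2pix_ZeroCostDL4Mic.ipynb")

with open("pix2pix_ZeroCostDL4Mic.py", "w") as f:
    f.write(source)
```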

Thank you

Mafalda

Hi Mafalda,
Yes, the training can take some time. The pix2pix dataset we provide contains a thousand images. If you have access to multiple GPUs, you can speed up the process by using more than one (you will need to modify the code of our notebook a bit and increase the batch size; see https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/300e84a78e77e22f08668180c65949971386175b/docs/tips.md). In Google Colab, training for 200 epochs took a bit over 8 hours.
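
For example, based on the options documented in that repository, a two-GPU run with a larger batch size could be launched roughly like this (the dataset path and experiment name below are placeholders, and the batch size may need tuning for your data and GPUs):

```python
# Sketch: launch pix2pix training from the pytorch-CycleGAN-and-pix2pix repo
# on two GPUs with a larger batch size (paths and names are placeholders).
import subprocess

subprocess.run([
    "python", "train.py",
    "--dataroot", "./datasets/actin_pix2pix",   # placeholder dataset folder
    "--name", "actin_pix2pix",                  # placeholder experiment name
    "--model", "pix2pix",
    "--gpu_ids", "0,1",                         # use GPUs 0 and 1
    "--batch_size", "4",                        # larger batches to feed both GPUs
], check=True)
```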
Cheers
Guillaume

Hi @Guillaume_Jacquemet

Thank you for the tip; it’s already running on both GPUs and is a lot faster. Now I need to limit the percentage of the GPU it uses… :slight_smile:
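
In case it helps someone else: if limiting GPU usage means capping how much memory the process may allocate, one option in recent PyTorch versions is torch.cuda.set_per_process_memory_fraction. A minimal sketch (the 0.5 fraction is just an example value):

```python
# Sketch: cap how much of each GPU's memory this process may allocate
# (requires a recent PyTorch; the 0.5 fraction is only an example value).
import torch

if torch.cuda.is_available():
    for device_id in range(torch.cuda.device_count()):
        torch.cuda.set_per_process_memory_fraction(0.5, device=device_id)
```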

Cheers,

Mafalda
