ZeroCostDL4Mic gives access to popular Deep Learning neural networks capable of carrying out tasks such as image segmentation and object detection (using U-Net, StarDist and YOLOv2), image denoising and restoration (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM) and image-to-image translation (using label-free prediction fnet, pix2pix and CycleGAN). With ZeroCostDL4Mic, researchers with little or no coding expertise can train (and re-train), validate, and use DL networks through a browser and for free, thanks to the Google Colab engine the platform runs on.
Some helpful videos:
- Video abstract for the release
- Examples of the type of processing you can do
- Tutorial explaining how it works
- Romain Laine's talk describing ZeroCostDL4Mic
Latest update, in brief:
- We released five new notebooks that train 3D U-Net, YOLOv2, Deep-STORM, CycleGAN and pix2pix.
- Release of associated training datasets on Zenodo
- Updated Preprint and Manual
- We illustrate our notebooks' applicability with examples of the results they can generate (6 new figures + 9 new videos).
- We illustrate the power of transfer learning (see the SI of our preprint).
- We illustrate the usefulness of data augmentation (see the SI of our preprint).
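To give a sense of what data augmentation means in this context, here is a minimal, generic sketch in Python/NumPy: flips and 90-degree rotations multiply a small training set without acquiring new images. This is only an illustration under our own assumptions, not code taken from the ZeroCostDL4Mic notebooks, which provide their own augmentation options.

```python
import numpy as np

def augment(image):
    """Return simple augmented copies of a 2D image (flips and rotations).

    A generic data-augmentation sketch, not the ZeroCostDL4Mic implementation:
    one input image yields six geometric variants.
    """
    return [
        image,                 # original
        np.flipud(image),      # vertical flip
        np.fliplr(image),      # horizontal flip
        np.rot90(image, k=1),  # 90-degree rotation
        np.rot90(image, k=2),  # 180-degree rotation
        np.rot90(image, k=3),  # 270-degree rotation
    ]

# Toy example: a 4x4 "image" becomes six training samples.
img = np.arange(16).reshape(4, 4)
aug = augment(img)
print(len(aug))  # 6
```

Applied to every image/target pair in a training set, even this simple scheme can noticeably improve results when training data is scarce, which is what the SI of our preprint demonstrates with the notebooks' built-in options.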
If you have any questions or comments, do not hesitate to get in touch with us! We regularly post updates on Twitter about the developments we are working on; to follow along, check out #ZeroCostDL4Mic on Twitter.
We hope you find ZeroCostDL4Mic useful for your work (or your hobbies!).
The ZeroCostDL4Mic team