Identify nucleolus in DIC/brightfield using deep learning

Dear all,


I would like to automatically detect nucleoli in DIC/brightfield images of zygotes. I tried some traditional image processing pipelines without success, so I would like to use deep learning. In principle I do not need an exact segmentation of the nucleoli; it is enough to identify their centroids.

Sample image

The objects I am interested in are marked in magenta.


My first try will be StarDist. The reason is that the objects are clearly convex, and @oburri made a nice tutorial on how to install it. So I am all set up to run StarDist on Windows virtual machines.

However, as mentioned in the forum, StarDist may be overkill for just estimating centroids. Furthermore, clicking on centroids is much faster than painting larger areas.
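Even if a segmentation model like StarDist is used, reducing its label image to centroids afterwards is a one-liner. As a minimal sketch (assuming the prediction is available as a 2D integer label image, e.g. the `labels` output of `model.predict_instances`), using only `scipy.ndimage`:

```python
import numpy as np
from scipy import ndimage as ndi

def label_centroids(labels):
    """Return one (row, col) centroid per labelled object.

    `labels` is a 2D integer array where 0 is background and each
    object has its own positive integer id (StarDist-style output).
    """
    ids = np.unique(labels)
    ids = ids[ids != 0]  # drop background
    # center_of_mass with a binary weight image gives the geometric centroid
    return ndi.center_of_mass(labels > 0, labels, ids)

# toy label image with two objects
labels = np.zeros((6, 6), dtype=int)
labels[1:3, 1:3] = 1   # 2x2 blob, centroid (1.5, 1.5)
labels[4, 4] = 2       # single pixel, centroid (4.0, 4.0)
print(label_centroids(labels))  # [(1.5, 1.5), (4.0, 4.0)]
```

The function name `label_centroids` is mine, not from any library; the same result can be obtained with `skimage.measure.regionprops`.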


Does anyone of you (e.g. @uschmidt83, @haesleinhuepf, @fjug, @oburri) have a suggestion for deep-learning workflows to identify centroids/points in an image? This should be easier than segmentation, with much faster training.

YOLO is very successful at finding bounding boxes and is extremely fast, but I have no hands-on experience with it. I found this repository, which makes it look easy.

That being said, for your needs, StarDist will be much easier to get started with, will give very good results, and will be very fast. From experience, training takes roughly 2 h on a GTX 1080 Ti, which is nothing, and inference is very fast afterwards.
We did something similar with a user, on slightly more complex images, and it worked out quite well.

One issue I foresee, regardless of the algorithm you use, is that with DIC images you are going to have to be very strict about how you set up your microscope. A change in the position of the prisms or a misaligned light path might make your trained model fail on subsequent data.
To mitigate this, you should probably take a bunch of samples (maybe 30?) and image each sample under different DIC configurations.


Your image from the NEUBIAS talk was also the reason why I am trying StarDist. The data you have is clearer than mine, but I hope I manage to get something useful. In the worst case it is all manual :frowning:


I would argue that your data is extremely clear as well.

Basically, if a human can do it (identify nucleoli) in less than 1 s, then a deep network can do it.


Yes, StarDist works very fast and I already got some results. Unfortunately, using ~100 images with 5-10 objects per image, I can only achieve a 50% true-positive rate. I will try to add more images, but it could well be that the data is simply not very clear. Personally, I have difficulties deciding whether a blob is a nucleolus or not, so I guess machine learning can't do better.
My strategy is to have additional "experts" annotate the data. In the worst case I end up with a set of labelled data, which is enough for the current project.
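Since only centroids matter here, the true-positive rate can be computed by matching each predicted point to at most one annotated point within a distance tolerance. A minimal sketch with `scipy.spatial.cKDTree` (the function name `match_points` and the tolerance value are my own choices, not from StarDist's built-in matching):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(pred, gt, max_dist):
    """Greedy one-to-one matching of predicted points to ground-truth
    points within max_dist pixels; returns (tp, fp, fn)."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    if len(pred) == 0 or len(gt) == 0:
        return 0, len(pred), len(gt)
    # for each prediction: distance to nearest GT point (inf if > max_dist)
    dists, idx = cKDTree(gt).query(pred, distance_upper_bound=max_dist)
    used, tp = set(), 0
    for d, j in sorted(zip(dists, idx)):  # closest pairs first
        if np.isfinite(d) and j not in used:
            used.add(j)
            tp += 1
    return tp, len(pred) - tp, len(gt) - tp

gt = [(10, 10), (50, 50), (90, 90)]
pred = [(11, 10), (52, 49), (200, 200)]
print(match_points(pred, gt, max_dist=5))  # (2, 1, 1)
```

With counts like these, precision is `tp / (tp + fp)` and recall is `tp / (tp + fn)`, which makes it easy to see whether the 50% is driven by missed nucleoli or by spurious detections.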

On a different note: I tried to install the TF 2.1 version using your yml script. Somehow I got a lot of errors; I wrote a comment on c4science. It could well be that it has something to do with my configuration, so I am checking whether the CUDA versions etc. all fit together.

Thanks for your help!

I can imagine that these images are small. Would you be interested in sharing them so I can try to train on my side? If it works, I will be happy to share the notebooks I used.



EDIT: Better yet, since the annotations seem to be in QuPath, could you share them as a QuPath project and send me the data?


Dear Olivier,
thanks for your help. Overall it seems to work, but I get a lot of false negatives. After looking more into the data, it is pretty clear why: I think the training set is still too small, and I have trouble identifying the spots myself.
I am choosing better time points and images and repeating the analysis. Once the annotation is done, I can share the notebooks and we can see whether the system can be tweaked to achieve a better result.