Setting up Ilastik to segment tiny, clustered objects

Greetings good people!

I am trying to segment pictures with very tiny, circular features. The problem is pretty much the same as in my older thread (this one): the pictures are noisy, and I can’t afford any kind of Gaussian or median filtering because it would “mix” very close circles together. Some circles are only a few pixels in diameter.

I’ll post here an example of the pictures I’m trying to work with (for reference, they are 800x800 px):

Since last time, though, I’ve managed to make some progress on the detection side, so now the problem is “just” segmenting the pictures properly.

I have been having quite some success with Ilastik and its pixel-classification mode, but I still need to fine-tune the process: the results are very good, but not entirely usable yet.
To be clearer, this is the result I get on the example above after training on five similar 2D pictures:

As you can see, Ilastik struggles to separate circles that are very close to each other.

My question is: what can I do to optimize the process? For very small, clustered circles, are there training features that I should turn on or off? Should I limit the features to smaller sigmas? Is there anything in particular that I should take care of during training?

Thank you in advance!

Sounds like your images are a good candidate for deep learning segmentation.
Did you try the Weka segmentation plugin in Fiji?

Have you tried StarDist (link to stardist video)? It might work right out of the box for you, and it can cope with overlapping objects.


I did! But Ilastik gave me better results on my data (maybe because Ilastik’s tools are easier to use on my pictures).

I can give it a look, but I’d rather not change software, especially because I built my workflow around Ilastik and invested quite some time in training.

A little update here! I tried StarDist, but I didn’t really manage to make it work. Some of my pictures have regions that are much darker than the rest, and StarDist recognizes either the droplets in the darker areas or the ones in the brighter areas, but not both. To keep it short, I had to bench it.

However, I persisted a bit with ilastik and obtained decent results that still need refining.

This is an example of the pictures I’m working with:

I trained ilastik with all feature types (intensity, edge, and texture) up to sigma=3.5, and with pictures like this one (why I did that is a pretty long story):

Extremely small, isolated droplets. The results on the first picture, as I said earlier, are pretty good:

While the segmentation is decent, it has several issues: the bigger droplets are segmented improperly, as they either have holes or irregular shapes. However, if I try to add them manually to the training, ilastik starts to “glue” the smaller droplets together and erases the less intense ones.

What can I do? I know next to nothing about machine learning, so I don’t know which features to exclude from the training, how to train ilastik, or how to pick the training dataset.

Hello @tonytani,

I’m interested 🙂
In general it should be safe to use all features. With your training data the random forest will figure out which ones to use.

How to go on depends a bit on the ultimate goal of your analysis…

What shape do you expect the droplets to have? If they are circular, you could possibly use ilastik pixel classification as an elaborate way to equalize the appearance of the image, and then maybe use something like a Hough transform to find the circles (I have not used the plugin, and there is always the difficulty that you don’t know how many circles you are looking for).
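The Hough-transform route can be prototyped outside Fiji with scikit-image. This is only a sketch on a synthetic stand-in for a probability map; the radii range and the peak count are guesses you would tune on real data.

```python
# Sketch: circular Hough transform on an edge map of a (synthetic)
# probability image, to locate small circles of unknown count.
import numpy as np
from skimage import draw, feature, transform

# Synthetic stand-in for an ilastik probability map: two bright disks
prob = np.zeros((100, 100), dtype=float)
for center, r in [((30, 30), 5), ((60, 70), 4)]:
    rr, cc = draw.disk(center, r)
    prob[rr, cc] = 1.0

edges = feature.canny(prob, sigma=1.0)   # edge map of the probability image
radii = np.arange(2, 10)                 # plausible droplet radii in pixels
hough = transform.hough_circle(edges, radii)

# Keep the strongest candidate circles; since the true number of droplets
# is unknown, total_num_peaks is just a generous upper bound
accums, cx, cy, rad = transform.hough_circle_peaks(
    hough, radii, total_num_peaks=4, min_xdistance=10, min_ydistance=10
)
for x, y, r in zip(cx, cy, rad):
    print(f"candidate circle at ({x}, {y}), radius {r}")
```

In practice you would threshold the accumulator values (`accums`) instead of fixing a peak count, since the number of circles per image varies.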

Another approach would be to accept that some of the droplets that are very close to each other might be merged, given that these can be clearly identified automatically. For this you could e.g. use Object Classification in ilastik, starting with the probability map from pixel classification. You could have classes there like “false detection”, “single object”, “two objects”, and so on. In the end you’d get a table with a row describing each of your objects.
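The same triage idea can be sketched in plain Python: label the thresholded probability map and flag blobs whose shape suggests two fused droplets. The mask here is synthetic and the 0.5 eccentricity cut-off is purely illustrative.

```python
# Sketch: shape-based flagging of likely-merged droplets in a binary mask.
import numpy as np
from skimage import draw, measure

mask = np.zeros((80, 120), dtype=bool)
rr, cc = draw.disk((40, 30), 6)          # an isolated, round droplet
mask[rr, cc] = True
for c in (80, 91):                       # two overlapping disks = fused pair
    rr, cc = draw.disk((40, c), 6)
    mask[rr, cc] = True

labels = measure.label(mask)
results = []
for region in measure.regionprops(labels):
    # eccentricity is ~0 for a circle and approaches 1 for elongated blobs
    guess = "single object" if region.eccentricity < 0.5 else "likely merged"
    results.append((region.label, guess))
print(results)
```

ilastik’s Object Classification learns this kind of rule from your clicks instead of a hand-picked threshold, but the principle is the same.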


Hello, and sorry for the late answer

I have done some (heuristic and sparse) testing and it seems that the features do have some effect on the actual segmentation, but nothing dramatic.

Ultimately, I need the number and the volume of the droplets. From Ilastik I need a properly segmented B/W image that I can feed to my analysis script. At this point, I’d be happy if it simply didn’t lose droplets, even if that comes at the “cost” of merging together some of the smaller, more clustered blobs.
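For reference, the downstream count-and-volume step on a B/W export could look like the sketch below, assuming roughly spherical droplets so a 2D equivalent radius yields a volume. The mask is synthetic; a real one would be loaded from the exported image.

```python
# Sketch: count droplets and estimate per-droplet volume from a binary mask.
import numpy as np
from skimage import draw, measure

mask = np.zeros((60, 60), dtype=bool)
for center, r in [((15, 15), 4), ((40, 40), 6)]:
    rr, cc = draw.disk(center, r)
    mask[rr, cc] = True

labels = measure.label(mask)
props = measure.regionprops(labels)
print("droplet count:", len(props))
for p in props:
    r_eq = np.sqrt(p.area / np.pi)          # radius of the equal-area circle
    volume = (4 / 3) * np.pi * r_eq ** 3    # sphere with that radius
    print(p.label, int(p.area), round(volume, 1))
```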

Anyway, I have since found a way to make Ilastik segment the pictures properly: feeding it many training pictures (15-20). This way the software manages to count droplets correctly and not to join clusters together.

Now I have two problems. First, the segmentation is too dependent on the intensity of the pixels, which is a major issue for my data because there are frequent intensity fluctuations that I don’t really know how to handle.
Ideally, the result of the training should be totally independent of the absolute intensity of the pixels and depend only on the contrast with the background.
To put it simply: if I give Ilastik a certain input image (A) and it gives me a certain segmentation (B), I’d like to get the same output (B) even if the input is A with its intensity divided by, say, 2.

I know that this is most likely impossible, but is there anything that I can do to make ilastik more robust with respect to intensity fluctuations?
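One generic preprocessing trick (outside ilastik, just a sketch) is to divide each image by a heavily smoothed copy of itself before training, so only local contrast against the background survives; a global intensity rescale then cancels out exactly. The `bg_sigma` value is a per-dataset guess and should be much larger than a droplet.

```python
# Sketch: background division makes the input invariant to global intensity
# scaling, because Gaussian filtering is linear.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize(img, bg_sigma=50):
    img = img.astype(float)
    background = gaussian_filter(img, bg_sigma)   # slowly varying background
    return img / (background + 1e-6)              # local contrast, scale-free

rng = np.random.default_rng(0)
img = rng.random((200, 200)) * 100
a = normalize(img)
b = normalize(img / 2)        # halving the intensity...
print(np.allclose(a, b))      # ...leaves the normalized image unchanged
```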

Also, when I put that many images into a training project, Ilastik becomes awfully slow and takes a long time to even load the live-training mode. Is there anything I can do? Can I give it crops of images instead of whole ones?
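Tiling the images before loading them could be sketched like this (the 800 px image size matches the thread; the 200 px tile size and the idea of feeding tiles as separate training images are assumptions to verify):

```python
# Sketch: split a large image into non-overlapping tiles to keep
# interactive training responsive.
import numpy as np

def tiles(img, size=200):
    """Yield non-overlapping size x size crops covering the image."""
    h, w = img.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield img[y:y + size, x:x + size]

img = np.zeros((800, 800), dtype=np.uint8)   # stand-in for one picture
crops = list(tiles(img))
print(len(crops), crops[0].shape)            # 16 crops of 200 x 200
```

Each crop would then be saved to disk (e.g. as TIFF) and added to the project in place of the full-size image.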

Thank you!