Okay, then it should be possible in ilastik using the following two-step approach:
1. Pixel Classification
You could start with the Pixel Classification workflow for foreground/background segmentation: foreground would be the cells you want to count, background everything else. Most important here is that the prediction includes all chondrocytes. Tip: your image is rather large, so don't zoom out too much while in live update mode, otherwise it will get slow. Once you have trained a good classifier, export the result (probabilities, preferably as an h5 file) and proceed to the second step.
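If you want to sanity-check the exported probabilities outside ilastik, you can open the h5 file with h5py. This is a minimal round-trip sketch with a tiny synthetic array; "exported_data" is ilastik's default dataset name for exports, so adjust it if you picked a different path in the export settings.

```python
import h5py
import numpy as np

# Synthetic stand-in for an exported probability map, stored the way
# ilastik does it by default: a dataset called "exported_data".
probs = np.random.rand(4, 4, 2).astype(np.float32)  # (y, x, channel)

with h5py.File("probabilities.h5", "w") as f:
    f.create_dataset("exported_data", data=probs)

# Re-open and inspect, as you would with your real export.
with h5py.File("probabilities.h5", "r") as f:
    loaded = f["exported_data"][...]

print(loaded.shape)  # (4, 4, 2)
```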
2. Object Classification
In this step you can "clean up" your segmentation and also get the result you are after: a table from which you can derive your counts.
As input data, use the original image as well as the exported probabilities from step one.
In the thresholding step you can try something like 0.5 and see whether it picks up all the chondrocytes (as individual objects).
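Conceptually, what the thresholding step does is binarize the foreground probability channel and split the mask into connected components. A rough sketch with a synthetic array (in practice the values would come from your exported probabilities):

```python
import numpy as np
from scipy import ndimage

# Synthetic foreground-probability channel; two blobs above 0.5.
foreground = np.array([
    [0.9, 0.9, 0.1, 0.1],
    [0.8, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.9],
    [0.1, 0.1, 0.9, 0.9],
], dtype=np.float32)

# Threshold at 0.5, then label connected components as individual objects.
mask = foreground > 0.5
labels, n_objects = ndimage.label(mask)
print(n_objects)  # 2
```

If the threshold merges neighboring chondrocytes into one object, raising it slightly (or using ilastik's two-threshold hysteresis option) can help separate them.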
As object features you can try to use “all excl. location”.
In training I would leave it at two classes: one for chondrocytes, one for everything else that might have been picked up. Train the classifier by giving a few examples of each.
In the end you can export a segmentation image, but, maybe more importantly for you, you can also export a table (csv, which can be read in Excel) that holds the predicted label for each object, along with the features you can use to further characterize your objects.
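From that csv, getting your counts is a one-liner in pandas. The column names below are hypothetical; the actual headers depend on your ilastik version and the features you selected, so check the first row of your export.

```python
import io
import pandas as pd

# Hypothetical excerpt of an ilastik object table export; real column
# names will differ, so adapt them to your csv header.
csv_text = """object_id,Predicted Class,Size in pixels
1,Chondrocyte,120
2,Chondrocyte,95
3,Other,40
4,Chondrocyte,110
"""

table = pd.read_csv(io.StringIO(csv_text))  # pd.read_csv("table.csv") for the real file
counts = table["Predicted Class"].value_counts()
print(counts["Chondrocyte"])  # 3
```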
Please let me know if this is enough to get you started!
- in case you are wondering whether you could use the "combined" workflow: I'm afraid it is very heavy on memory, so we do not recommend it for larger datasets
- the documentation is a bit out of date and we are reworking it