Segmentation for Autofluorescence


I am posting here for the first time. I work with cellular autofluorescence images, which I find quite difficult to segment at the single-cell level using thresholding and similar tools. Currently, I use a combination of the ImageJ wand tool, selection tool, and threshold analysis to generate ROIs for each cell. Is there a better tool or method I can use to separate these cells without extensive manual intervention?
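For context, the threshold-then-label step I'm describing is roughly equivalent to this minimal Python sketch (NumPy/SciPy, with a synthetic image and an arbitrary threshold standing in for my real data):

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic stand-in: two well-separated bright "cells" on a dark background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
img = 0.05 * rng.standard_normal((64, 64))
img[(yy - 16) ** 2 + (xx - 16) ** 2 < 64] += 1.0
img[(yy - 48) ** 2 + (xx - 48) ** 2 < 64] += 1.0

# Global threshold (the Threshold step), then connected-component
# labeling gives one ROI per cell (the wand / ROI Manager step)
mask = img > 0.5
labels, n_cells = ndi.label(mask)
print(n_cells)  # 2
```

This works when cells are well separated; it is exactly the touching-cell case where it falls apart for me.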


I have put more example images at the GitHub link below. If I manage to solve this, I will add the solution to that repo.


Hi Jenu,

You could use a program like Weka or ilastik. They work according to the same principle: you take images and manually define "classes" (e.g. cell and background). The program learns to distinguish cell from background from your annotations and feedback. You can then apply the trained classifier to new images the program has not seen. Weka comes bundled with Fiji.
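A rough sketch of that principle, using scikit-learn's random forest in place of Weka/ilastik (synthetic image, hypothetical "brush stroke" labels, and a deliberately minimal feature set, for illustration only):

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

# Synthetic training image: a bright square "cell" on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:44, 20:44] += 0.6

# Per-pixel features, as Weka/ilastik compute: raw intensity plus
# Gaussian-smoothed copies at two scales
feats = np.stack([img,
                  ndi.gaussian_filter(img, 1),
                  ndi.gaussian_filter(img, 3)], axis=-1)
X = feats.reshape(-1, 3)

# Sparse "brush stroke" annotations: 1 = cell, 0 = background, -1 = unlabeled
y = np.full(img.size, -1)
y[32 * 64 + np.arange(22, 42)] = 1   # a stroke across the cell interior
y[:64] = 0                           # a stroke along the top (background) row
train = y >= 0

# Train on the strokes only, then predict a class for every pixel
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[train], y[train])
pred = clf.predict(X).reshape(img.shape)
print(pred[32, 32], pred[5, 5])  # 1 0
```

The real tools add many more features (edges, textures, multiple scales) and an interactive feedback loop, but the train-on-scribbles, predict-everywhere idea is the same.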

Good luck!


You could also try the various thresholding methods in the IdentifyPrimaryObjects module of CellProfiler. That would have the advantage of not requiring the interactive machine-learning training the two aforementioned tools need, so it would be a simpler solution and hopefully require less adjustment between experimental batches. But thresholding methods are also less powerful, so it's a trade-off!
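For what it's worth, the same family of automatic thresholding methods can be compared quickly outside CellProfiler with scikit-image (synthetic image; the values are illustrative):

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_li, threshold_triangle

# Synthetic image: a bright patch (~15% of pixels) on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (128, 128))
img[40:90, 40:90] += 0.5

# Compare a few of the automatic methods IdentifyPrimaryObjects offers
for name, fn in [("otsu", threshold_otsu),
                 ("li", threshold_li),
                 ("triangle", threshold_triangle)]:
    t = fn(img)
    print(f"{name}: threshold={t:.2f}, foreground fraction={(img > t).mean():.2f}")
```

Trying each method on a representative image this way can tell you quickly whether any global threshold is even viable before building the full pipeline.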


Hi Terhai, thank you for the suggestion. I use Weka for my primary cell/background separation (I have tried ilastik in the past too). The single-cell separation step after that stage is tediously manual. With a known average cell area, I do a lazy spatial averaging to avoid this single-cell segmentation step. I am currently looking into a U-Net-based classifier, but I haven't gotten far down that route.

Hi AnneCarpenter, I used CellProfiler-based identification pipelines a couple of years back (at lower magnification) and they worked well. But as you mentioned, I have found that most thresholding tools reach a point where I have to intervene manually and draw the zero-weight pixels to separate objects. The current pipeline I use in ImageJ (Segment nuclei -> Threshold + wand -> Add to ROI Manager) uses a very similar strategy to CellProfiler's (Rescale -> EnhanceEdges -> IdentifyObjectsManually). I hope advancements in U-Net-based single-cell segmentation will pave the way to easier cell separation in the near future.
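The zero-weight-line idea can be approximated with a seeded watershed on the distance transform, which is roughly what CellProfiler's shape-based declumping does. A minimal sketch with SciPy/scikit-image on a synthetic pair of touching cells (the 0.7 marker threshold is an arbitrary choice for this example):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Two overlapping disks: thresholding alone yields a single connected blob
yy, xx = np.mgrid[:80, :80]
mask = ((yy - 40) ** 2 + (xx - 28) ** 2 < 200) | \
       ((yy - 40) ** 2 + (xx - 52) ** 2 < 200)
blob_count = ndi.label(mask)[1]   # 1: the two cells are fused

# Distance-transform maxima act as per-cell seeds; the watershed then
# draws the separating line that otherwise has to be drawn by hand
dist = ndi.distance_transform_edt(mask)
markers, _ = ndi.label(dist > 0.7 * dist.max())
labels = watershed(-dist, markers, mask=mask)
print(blob_count, labels.max())  # 1 2
```

Whether this works on real autofluorescence data depends on the cells being roughly convex; irregular shapes may need nuclei-derived seeds instead.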


Ah, I see. This is all very interesting. If ilastik-style machine learning is working well for you for foreground/background separation, I wonder if following it with CellProfiler might do the trick, as described here:


Thank you for the reply.
Yes, I use the IdentifyObjects modules in CellProfiler. They work like magic on low-magnification images. The problem with high-magnification images is that the mitochondria-like objects are widely separated and create fake boundaries. None of the CellProfiler object-detection combinations I tried could identify the true boundaries; I see more false-positive objects. I know this step is subject to my choice of image-processing steps before the primary and secondary identification. In many cases, however, I was able to solve it by segmenting nuclei as the primary objects and applying strong smoothing to the mitochondrial signal. With the advent of so many new deep-learning semantic-segmentation models, this may soon be available as a pretrained model.
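The nuclei-as-seeds plus strong-smoothing strategy can be sketched like this (synthetic punctate data; the sigma, threshold, and seed positions are arbitrary assumptions for illustration):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Punctate "mitochondria-like" signal scattered inside two overlapping cells
rng = np.random.default_rng(2)
yy, xx = np.mgrid[:80, :80]
cells = ((yy - 40) ** 2 + (xx - 24) ** 2 < 300) | \
        ((yy - 40) ** 2 + (xx - 56) ** 2 < 300)
img = (cells & (rng.random((80, 80)) < 0.3)).astype(float)

# Strong smoothing merges the puncta into one blob per cell, suppressing
# the fake boundaries between individual mitochondria-like objects
smooth = ndi.gaussian_filter(img, sigma=4)
mask = smooth > 0.05

# Nuclei (assumed to come from a separate channel/segmentation) seed the
# watershed, which assigns every foreground pixel to its nearest cell
nuclei = np.zeros((80, 80), dtype=int)
nuclei[40, 24] = 1
nuclei[40, 56] = 2
labels = watershed(-smooth, nuclei, mask=mask)
print(labels.max())  # 2
```

This mirrors the primary/secondary object logic in CellProfiler: nuclei as primary objects, the smoothed mitochondrial signal as the territory for the secondary (whole-cell) objects.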
