Pipeline Optimization for Identifying/Counting Fluorescing Adherent Cells - issues with intensity threshold

cellprofiler
intensity
threshold
cellcounting

#1

Hello,

I am coming here for expertise on pipeline optimization for cell counting, specifically counting fluorescing adherent cells. My cells overlap quite a bit, which can make the counting procedure a bit tricky. I have attached a result of the pipeline I have made, but as you can see, it is missing quite a few cells in the middle… I believe this is due to the automatically generated threshold of 0.186.

Here are the parameters of the IdentifyPrimaryObjects module that I have been using (I have also attached my pipeline if that makes it easier):

  • Typical diameter: 30-180 pixel units; Discard objects outside range: Yes.
  • Threshold strategy: Adaptive (Should Global be used??)
  • Thresholding method: Otsu (Is this the right choice…?)
  • Two-class or three-class thresholding: Two classes
  • Minimize the weighted variance or the entropy: Weighted variance
  • Select the smoothing method for thresholding: Automatic (Is it better to use a manual method? If so, how to play around with it…?)
  • Threshold correction factor: 1.0
  • Lower & Upper bounds on threshold: 0 - 1.0
  • Method to distinguish clumped objects: Intensity

Thank you for your help!

Hugo

Updated_Counter_export.cppipe (8.0 KB)


#2

Welcome!
You definitely want adaptive thresholding, not global. That is the magic behind doing a good job on regions of the image that are dimmer vs regions that are brighter.

You need the threshold to be a bit lower in this case, so I suggest playing around with the threshold correction factor, setting it to 0.8 and seeing if that is getting better at capturing the dimmer cells (without mistakenly capturing background).

If you are unable to get a value that works well (testing on other images in the set too, not just optimizing to one), then I would suggest giving three-class thresholding a try, with the middle class assigned to foreground.
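
If it helps to see the idea outside CellProfiler, here’s a minimal sketch of adaptive Otsu thresholding with a correction factor, using scikit-image. This only approximates what IdentifyPrimaryObjects does; the filename, block size, and grayscale input are assumptions:

```python
import numpy as np
from skimage import io, img_as_float
from skimage.filters import threshold_otsu

correction_factor = 0.8  # values below 1.0 lower the threshold to catch dimmer cells

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename, assumed grayscale
block = 128  # size of the adaptive window, in pixels

global_t = threshold_otsu(img)
mask = np.zeros(img.shape, dtype=bool)
for r in range(0, img.shape[0], block):
    for c in range(0, img.shape[1], block):
        tile = img[r:r + block, c:c + block]
        # Per-tile Otsu is the "adaptive" part; fall back to the global
        # threshold for flat (e.g. empty) tiles, where Otsu is undefined.
        t = threshold_otsu(tile) if tile.min() < tile.max() else global_t
        mask[r:r + block, c:c + block] = tile > t * correction_factor
```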
Let us know how it’s going!


#3

Thank you @Anne_Carpenter, this does solve part of the issue! How can I find a systematic way of identifying the right correction factor? Also, if I use a different threshold correction on two images and compare the numbers of objects, would that type of comparison be acceptable? Or would it just skew my results?

Cheers,

Hugo


#4

Hi,

Besides using “Adaptive” as Anne suggests, please do try using

  • Thresholding method: Otsu
  • Two-class or three-class thresholding: Three-classes
  • Assign middle intensity: Foreground

This combination (Adaptive, Otsu, three classes) is a more systematic way to find the right threshold than setting static correction values.
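
For intuition, here’s roughly what three-class Otsu computes, sketched with scikit-image’s threshold_multiotsu (not CellProfiler’s exact implementation, which additionally applies this per adaptive window; the filename is a placeholder):

```python
from skimage import io, img_as_float
from skimage.filters import threshold_multiotsu

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename

# Two thresholds split the histogram into dark / middle / bright classes.
t_low, t_high = threshold_multiotsu(img, classes=3)

# "Assign middle intensity: Foreground" means everything above the LOWER
# threshold counts as foreground, so dim cells are kept.
foreground = img > t_low
```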

It’s not a fair segmentation comparison if you use two different static correction factor values on two images.
You’d only want to do that when the two images have very different brightness (e.g. because of illumination errors), and even in that case it’s better to apply illumination correction and then use the same segmentation settings on both.
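
To illustrate the illumination-correction route (CellProfiler’s CorrectIlluminationCalculate and CorrectIlluminationApply modules do this properly; this sketch just divides out a heavily smoothed estimate of the field, with an assumed sigma and filename):

```python
import numpy as np
from skimage import io, img_as_float
from skimage.filters import gaussian

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename

illum = gaussian(img, sigma=100)           # coarse estimate of the illumination field
corrected = img / np.maximum(illum, 1e-6)  # divide the field out
corrected /= corrected.max()               # rescale back to [0, 1]

# Now the same segmentation settings can be reused across images.
```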

Hope that helps.


#5

Hello @Minh,

Thanks for your help, that also helps quite a bit (as long as I change the lower and upper bounds on the threshold)! Is it more important to minimize the weighted variance or the entropy?

I have attached a screenshot of a new result, along with the original image if you wish to take a look. The screenshot shows that despite the threshold being at 0.1, the pipeline is also circling regions that are well below that threshold (around 0.05)… Why is that the case? Even though it successfully identifies objects, this is problematic for my application: I am looking for a rough estimate of average object intensity, and these regions will bring the average way down! There are also quite a few objects that stay clumped together despite a clear-cut intensity difference, which I find quite strange.

Please let me know if you have any other ideas.

Thanks again,

Hugo



#6

Hi,

Can you check your settings in that module for “Fill holes”? It’s probably set to “After thresholding and declumping”; try changing it to “None” or “After declumping only” and see if that improves your results.
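
To see why filled holes can enclose pixels below the threshold, here’s roughly what hole-filling does (a scipy sketch, not CellProfiler’s own code; the filename and threshold are placeholders):

```python
from scipy.ndimage import binary_fill_holes
from skimage import io, img_as_float

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename
mask = img > 0.1  # the manual lower bound discussed above

# Any dim region fully enclosed by foreground gets pulled into the mask,
# regardless of its intensity.
filled = binary_fill_holes(mask)
print("pixels added by hole-filling:", int(filled.sum()) - int(mask.sum()))
```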


#7

Hello @bcimini,

Both of these solutions give the same results, which are far better than I had before!!

Thank you very much, I’ll try working with that for now but I don’t think it can get any better than this!

Cheers,

Hugo


#8

Hello,

My pipeline is looking better and better each time I come here for help, which is why I’m coming back! I have attached a picture that exemplifies a few oddities, and I don’t know if I should stop being a perfectionist or if there is still room for improvement. Here are two points I am concerned about looking at this image:

  1. The two rounded cells in the top left quarter (and even the two cells in the upper right corner) look to me like they have a very sharp boundary, but the declumping systematically fails to identify them as distinct objects. If I play around with smoothing filters and local maxima, I can get them to separate, but then some random, unforeseen segmentation appears in what seems to be a homogeneous object…

  2. On another note, I have set the thresholding minimum at 0.1, but when you look at the objects, the outlines encapsulate regions at the edges that are well below that minimum. Is it possible to achieve higher resolution (a boundary right at the edge of a cell), or is this the best I can expect? Such results may skew the fluorescence intensities of individual objects.

Thank you for your help, I feel like the optimization is nearing the end!

Hugo


#9

Sorry, here is the image. Also, I feel like using the module would be quite useful, but the resulting image for downstream processing only keeps the edges and gives weird results…


#10

The two rounded cells in the top left quarter (and even the two cells in the upper right corner) look to me like they have a very sharp boundary, but the declumping systematically fails to identify them as distinct objects. If I play around with smoothing filters and local maxima, I can get them to separate, but then some random, unforeseen segmentation appears in what seems to be a homogeneous object…

I agree they look distinct, but no segmentation is going to be perfect- those cells seem to be much smaller in area than the others, which is also going to lead to issues. My rule of thumb for optimizing is to tweak the settings until I have roughly as many objects split wrongly in half as lumped wrongly together- at that point, you just have to declare good enough.
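
For what it’s worth, here’s a rough sketch (outside CellProfiler) of the kind of intensity-based declumping being tuned here: a watershed seeded at smoothed local intensity maxima. sigma and min_distance stand in for the smoothing-filter and local-maxima settings; the filename, threshold, and values are placeholders:

```python
import numpy as np
from skimage import io, img_as_float
from skimage.feature import peak_local_max
from skimage.filters import gaussian
from skimage.measure import label
from skimage.segmentation import watershed

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename
mask = img > 0.1

smoothed = gaussian(img, sigma=3)  # more smoothing = fewer seeds = fewer splits
peaks = peak_local_max(smoothed, min_distance=15, labels=label(mask))

seeds = np.zeros(img.shape, dtype=int)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Smaller min_distance splits clumps more aggressively - and risks
# splitting homogeneous objects, exactly the trade-off described above.
objects = watershed(-smoothed, seeds, mask=mask)
```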


On another note, I have set the thresholding minimum at 0.1, but when you look at the objects, the outlines encapsulate regions at the edges that are well below that minimum. Is it possible to achieve higher resolution (a boundary right at the edge of a cell), or is this the best I can expect? Such results may skew the fluorescence intensities of individual objects.

The resolution should be pixel thin, so it’s one of three things:

  a) There’s still a problem with your “Fill holes” setting (do the regions go away when it’s set to “None”?);
  b) those pixels ARE above 0.1 (some of them I notice are next to some pretty bright cells, so the halo-ing that naturally happens in that case could push them over; try hovering over some of the pixels you disagree with it calling, and check the brightness reported at the bottom of the window); or
  c) it’s a bug.

Check a) and b) first, and if it’s neither and you suspect c), then upload your current pipeline and a sample image that displays the behavior and we’ll check it out for you.
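
If hovering over many pixels gets tedious, a quick scripted equivalent (coordinates and filename are hypothetical):

```python
from skimage import io, img_as_float

img = img_as_float(io.imread("control_4.tif"))  # hypothetical filename
y, x = 412, 305  # a pixel you disagree with it calling
print(f"intensity at (x={x}, y={y}): {img[y, x]:.3f}")
```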


#11

I think there might be a misunderstanding - I read the second part of the question as saying that the boundaries around some of the cells are too lenient. I’d suggest using adaptive thresholding.


#12

Hello,

Thanks again for all your help. Attached are my pipeline and a few images; maybe you will be able to better understand what I’m doing wrong… The two images are of lung cancer cells stably expressing GFP after Hoechst staining, so one image is under blue excitation (control_4) and the other under UV excitation (control_6). Theoretically, and as confirmed by overlaying both images (control_4_overlap), all cells should be GFP+. However, when I count the GFP objects, I get 810 versus 1057 stained nuclei. That’s already pretty good, but it means that I’m neglecting about 1 out of 4 cells…

When assessing knock-out in other images, it gets tricky, given that I don’t know how much of the difference in count should be attributed to counting discrepancies versus actual KO.

Let me know if you have any suggestions! If not, I may have to stick with manual counting…

Cheers,

Hugo

Update_14_12.cppipe (7.0 KB)


#13

(just in case the files were too heavy)


#14

First of all, I definitely do see some cells that don’t seem to have any strong GFP signal - such as this one. Stable lines or not, even if you FACS-sort for only positive expressors, in my experience they tend to “spread out” their intensity profile, which means some will be very dim and some very bright.

[image]


Secondly, given the difference in intensity, it’s already not surprising that you get different counts when you segment on the GFP channel versus the DAPI; but since your DAPI is better separated, I’d also expect you to get a more accurate count (less clumping) when you identify based on DAPI rather than GFP.


Finally, if the eventual goal of this pipeline is to count the % of GFP-positive cells and test what percent remain after a knockout, then 100% for sure you want to identify cells based on DAPI and then filter them for whether they’re GFP-positive or not (for all the reasons you list). I’ve modified your pipeline to do something like that: a cell is called a GFP-positive-cell if it overlaps the GFP-positive area by at least 75%. That 75% cutoff is totally arbitrary; you can tune it based on what you think looks best. Take a look and see what you think. FWIW, based on this totally untuned approach I get 89% GFP positive, which by eye looks roughly right to me.
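
In script form, the filter is roughly this (the actual pipeline does it with modules, not a script; the masks, thresholds, and filenames below are assumptions):

```python
from skimage import io, img_as_float
from skimage.measure import label, regionprops

dapi = img_as_float(io.imread("control_6.tif"))  # hypothetical filenames
gfp = img_as_float(io.imread("control_4.tif"))

nuclei = label(dapi > 0.1)       # objects identified on the DAPI channel
gfp_positive_area = gfp > 0.1    # foreground from the GFP threshold

min_overlap = 0.75  # the arbitrary cutoff mentioned above; tune by eye

positive = 0
for region in regionprops(nuclei):
    rr, cc = region.coords.T  # pixel coordinates of this nucleus
    # Fraction of the nucleus covered by GFP-positive pixels
    if gfp_positive_area[rr, cc].mean() >= min_overlap:
        positive += 1

print(f"{100 * positive / nuclei.max():.0f}% GFP positive")
```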

Update_14_12_BCEdits.cppipe (10.7 KB)


#15

Hello @bcimini,

This is truly fantastic! After playing around with the thresholds and the fraction of the object that must overlap, I am getting very accurate results - in the controls, only 2% of the cells fail to appear GFP-positive. This lets me get the KO efficiency from images in just one click! I would like to include the code/script in my thesis; is there any way to export it?

Thank you very much, I am extremely grateful for all your help.

Hugo


#16

Hi Hugo,

I’m so glad to hear it’s working for you! The .cppipe file is just a text file (try opening it in Notepad or any text editor), so once you finalize yours you can certainly include it in your thesis (or as supplemental data in any papers!). If it’s not too much to ask, a citation is always nice too. :)

Good luck!


#17

I will gladly cite CellProfiler when I get to a publication!

Cheers,

Hugo