CP not registering weak actin fibres for secondary object identification

Hello. I am analysing some NIH3T3 cell line mutants with a really weak actin fibre (phalloidin/red colouring) phenotype compared to my other mutants. I usually use the actin fibre to identify the secondary objects (i.e. the cells), but because the actin is so weakly expressed in these cells, CP has a hard time identifying them. I have tried playing around with the EnhanceOrSuppressFeatures module, but it hasn’t really helped. I really don’t want to lose these cells; it would be a real shame, since my pipeline has otherwise been working adequately.

I have added my pipeline and some example images:

  • Pipeline JNO 2.5 new names.cpproj (1.3 MB)
  • NIH3T3 D1 RhoA G62E 01-Image Export-25_b0c0x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 01-Image Export-25_b0c0-3x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 01-Image Export-25_b0c1x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 01-Image Export-25_b0c2x0-1388y0-1040.tif (4.1 MB)

I would add more images but I can’t because I’m a new user?

Thank you so much, I am still quite a CP noob.

Hi @EllenAppel,

Great work with your CP pipeline. Here are a couple of ideas for detecting weak signal above background for you to try:

  • Otsu thresholding using three classes will probably be helpful to you since you have a wide spectrum of intensities that make up your secondary object. I recommend the CellProfiler video tutorial here on the COBA YouTube page for more explanation of the various thresholding methods and why you might choose Otsu 3-class thresholding: https://www.youtube.com/watch?v=eriZdORpFxs&t=1s
  • At least in your example image, several cells that touched the border of the image were discarded. If that is what’s most appropriate for your downstream analysis, then you’ll want to keep that setting, but I wanted to make sure you’re aware of it.
  • Sometimes taking the square root of an image with the ImageMath module can make a weak signal more uniform across the cytoplasm. For the operation setting, you would select “None”, then set “Raise the power of the result by” to 0.5 (see the sketch after this list).
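
In case it helps to see the two ideas above outside of CellProfiler, here is a minimal scikit-image sketch of a power-0.5 rescale followed by a three-class Otsu threshold. The file name is a placeholder, and treating the dim “middle” Otsu class as foreground is an assumption you would want to verify against your own images.

```python
import numpy as np
from skimage import io, util
from skimage.filters import threshold_multiotsu

# Placeholder file name -- substitute one of your exported actin channel images
img = util.img_as_float(io.imread("actin_channel.tif"))

# Equivalent of ImageMath with Operation = "None" and
# "Raise the power of the result by" = 0.5: lifts weak signal
boosted = np.sqrt(img)

# Three-class Otsu returns two thresholds; taking everything above the lower
# threshold assigns the dim "middle" class to foreground (assumption to verify)
t_low, t_high = threshold_multiotsu(boosted, classes=3)
foreground = boosted > t_low

print(f"thresholds: {t_low:.3f} / {t_high:.3f}, "
      f"foreground fraction: {foreground.mean():.2%}")
```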

I hope some of these are helpful. Cheers,
Pearl

Thank you so much Pearl!
I agree, Otsu is the best thresholding method for my data.

Unfortunately, I can’t use the cells that touch the border in my analysis.

The ImageMath tip seems really useful; it definitely brightens my cells up - I will use this in my pipeline from now on!

Unfortunately I am still having some problems, especially with regard to separating my cells. An example would be this image set from the same experiment:
  • NIH3T3 D1 RhoA G62E 04-Image Export-28_b0c0-3x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 04-Image Export-28_b0c0x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 04-Image Export-28_b0c1x0-1388y0-1040.tif (4.1 MB)
  • NIH3T3 D1 RhoA G62E 04-Image Export-28_b0c2x0-1388y0-1040.tif (4.1 MB)

Even with the much brighter staining, thanks to ImageMath, it separates the cells at the weirdest (and clearly wrong) locations and also fails to locate all of the cells. Any more tips? :pray:

Pipeline JNO 2.5 new names after help.cpproj (973.3 KB)

To understand how secondary objects are split, I encourage you to try to “think like CellProfiler”. The help section for the “Select the method to identify the secondary objects” setting explains the different options for drawing dividing lines between secondary objects. Most of these methods rely on intensity changes (either increased or decreased intensity) at the dividing line between secondary objects.

If you use your mouse to hover over the intensities at cell borders for your input image into the IdentifySecondaryObjects module, you’ll see that your intensities are quite similar on either side of a cell-cell border. For this reason, it’s difficult for CellProfiler to recognize what seems like an obvious cell-cell border. Instead of relying on intensity, your eyes are primarily using texture to differentiate between cells.
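If you prefer to check this programmatically rather than by hovering in the GUI, a quick line profile across a suspected border makes the same point. This is just a scikit-image sketch; the file name and the (row, col) coordinates are placeholders to replace with a border you can actually see in your image.

```python
from skimage import io, util
from skimage.measure import profile_line

# Placeholder file name -- use the image you feed into IdentifySecondaryObjects
img = util.img_as_float(io.imread("actin_channel.tif"))

# Placeholder (row, col) endpoints on either side of a cell-cell border
profile = profile_line(img, src=(500, 300), dst=(500, 360))

# If the values stay roughly flat across the border, there is no intensity
# change for CellProfiler to latch onto, which matches the behaviour above
print(profile.round(3))
```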

In order to improve secondary object detection, you can try to create an image that will have either increased or decreased intensity at cell-cell borders. For example, you could:

  • Try different “Methods to identify the secondary objects” to see if they work better for your images. The help section explains what each method is optimized for
  • Use EnhanceOrSuppressFeatures to enhance texture and create a new image that has highest intensity at the edges of your cells (where texture is greatest)
  • Use the ImageMath module to then add or subtract this image from the image of your cells - anything that increases the intensity contrast at your edges. You can adjust the weights of each image that you’re adding / subtracting (a rough sketch of this combination follows the list)
  • Feed the output of that ImageMath module into IdentifySecondaryObjects
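
As a rough illustration of the texture-to-intensity idea in the list above, here is a scikit-image/scipy sketch outside of CellProfiler. It uses a local standard deviation as a simple texture image and then combines it with the original using adjustable weights; the window size and weights are placeholders to tune, and EnhanceOrSuppressFeatures has its own texture options, so treat this only as a sketch of the principle.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, util

# Placeholder file name -- substitute your actin channel image
img = util.img_as_float(io.imread("actin_channel.tif"))

# Local standard deviation as a crude texture image: high where the
# fibre texture (and thus a cell edge) is busiest. Window size is a guess.
win = 9
local_mean = ndi.uniform_filter(img, size=win)
local_sq_mean = ndi.uniform_filter(img ** 2, size=win)
texture = np.sqrt(np.clip(local_sq_mean - local_mean ** 2, 0, None))

# Weighted combination, roughly what ImageMath's "Add" with per-image weights
# does: boost intensity where texture is high to sharpen cell-cell borders
combined = np.clip(0.7 * img + 0.3 * texture / texture.max(), 0, 1)

# 'combined' would then play the role of the input image to
# IdentifySecondaryObjects in the pipeline
```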

As a general principle, anything that you can do to either enhance or decrease the intensity at borders of your cells should help to accurately segment the cells.