Size measurement of objects with different intensities

Hello,
I would like to measure the size of particles. The problem is that the fluorescence intensity varies quite a lot among them. I can easily identify and count the particles, but once a threshold is applied the low-intensity particles come out much smaller than they should be (see img1 and img2). Ideally, if I did it “by hand”, I would make a line selection, get a plot profile of intensities through the particle (img3), and then measure the diameter as the pixel distance at a fixed level (say, one third of the way between the background line and the top of the peak).
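Just to illustrate, this is roughly what I do by hand, written as an untested MATLAB sketch; the file name, the line endpoints, and the one-third level are only placeholders:

[code]
% Untested sketch of the manual measurement; file name and line endpoints are placeholders.
I = im2double(imread('particles.tif'));
x = [120 180];  y = [95 95];              % line selection drawn across one particle
p = improfile(I, x, y, 200);              % plot profile: 200 samples along the line

bg    = min(p);                           % background level along the line
peak  = max(p);                           % top of the peak
level = bg + (peak - bg)/3;               % one third between background and peak

above = find(p >= level);                 % samples above the cut level
lineLen = hypot(diff(x), diff(y));        % length of the line in pixels
diameterPx = (above(end) - above(1)) * lineLen / numel(p);
[/code]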
Is there any way to do something like this in CP?
I was thinking about a pipeline that would 1) identify the objects, 2) slightly enlarge each object by some factor, 3) crop the objects, apply per-object thresholding on the original image, and then measure the new objects…
But this seems like quite a complicated way… and I am also not sure it is right: do any of the threshold methods do something similar to what I described with the plot profile of intensities? Another problem with this object enlarging and per-object thresholding could be that each crop would contain a different amount (proportion) of black (background) pixels…
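Something like this rough, untested sketch is what I have in mind; the padding and the initial global threshold are just placeholders:

[code]
% Untested sketch of the "enlarge and re-threshold per object" idea.
I  = im2double(imread('particles.tif'));   % placeholder file name
bw = im2bw(I, graythresh(I));              % rough global segmentation to find objects
L  = bwlabel(bw);
stats = regionprops(L, 'BoundingBox');

refined = false(size(I));
pad = 10;                                  % "slightly enlarge" each object's box
for k = 1:numel(stats)
    bb = round(stats(k).BoundingBox);      % [x y width height]
    r1 = max(bb(2)-pad, 1);  r2 = min(bb(2)+bb(4)+pad, size(I,1));
    c1 = max(bb(1)-pad, 1);  c2 = min(bb(1)+bb(3)+pad, size(I,2));
    crop = I(r1:r2, c1:c2);
    t = graythresh(crop);                  % per-object Otsu; depends on the background proportion
    refined(r1:r2, c1:c2) = refined(r1:r2, c1:c2) | (crop > t);
end
[/code]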

Thanks for advice…

John T.





Hi John,

It seems that you can make a CellProfiler pipeline that has the following modules:

  • LoadImages to load each image.
  • ColorToGray to split the color image into the red, green and blue channels. Since it looks like the information is contained in the green channel, you just need to keep that one.
  • IdentifyPrimaryObjects to identify the objects from the green channel. Using the image img1.jpg you posted, setting the typical object diameter to 150 as the lower limit and 400 as the upper limit, and setting the threshold method to “Otsu Global” with three-class thresholding and the middle intensity class assigned to the background, seems to give a good result.

If you want to adjust the tightness/looseness of the identified object boundary, you can either change the thresholding method and/or adjust the threshold correction factor. This has the effect of contracting/expanding the object diameter based on the intensity levels in the image. Also, these methods attempt to automatically find the threshold independently of the amount of foreground/background pixels.
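To make the idea concrete, here is a rough MATLAB analogue of that thresholding setting; this is not what CellProfiler runs internally, and the correction factor value is just a placeholder:

[code]
% Rough analogue of three-class Otsu with the middle class assigned to background.
I = im2double(imread('img1.jpg'));
G = I(:,:,2);                              % keep the green channel
levels = multithresh(G, 2);                % two thresholds -> three intensity classes
corr   = 1.0;                              % analogue of the threshold correction factor
bw = G > levels(2) * corr;                 % only the brightest class counts as foreground
[/code]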

Hope this helps!
-Mark

Hi John,

I think a gamma correction could help you, because it would bring the low intensity values up without modifying the high intensity values as much.

[quote]

a = [0 10 20 100 200 250];
a = uint8(a);
b = imadjust(a,[0 1],[0 1], .5)

b =

     0   50   71  160  226  252
[/quote]

This MATLAB example uses a gamma factor of 0.5 on an 8-bit image with 6 pixels, but it would work the same on your image. Just use the RunImageJ module, and from there you can apply the gamma correction in ImageJ.

Alternatively, you could try using a lookup table and remapping your low values to higher values.

The only problem with this approach is that you can end up with very bright background noise, so first subtract a smoothed image or use the illumination correction modules. You really have to watch out for this if you have background pixels that are brighter than the low-intensity edges of the cells.
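For example, something along these lines (a sketch only; the filter sizes are assumptions):

[code]
% Sketch: flatten the background before the gamma correction so noise is not boosted.
I  = im2double(imread('particles.tif'));                       % placeholder file name
bg = imfilter(I, fspecial('gaussian', 201, 50), 'replicate');  % heavily smoothed copy
flat = max(I - bg, 0);                                         % subtract the smoothed background
% alternative: flat = imtophat(I, strel('disk', 50));
out = imadjust(flat, [0 1], [0 1], 0.5);                       % then apply gamma 0.5 as above
[/code]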

Hope it helps!

Roberto

Hi,
thank you both for your fast answers. The gamma correction is a good idea and it can help a little, but as you wrote, the background is a problem here; I am trying to work it out somehow.
I did not post the original image before, only a zoomed part of it… now I am posting the original tif.
I still think that global thresholding would give false results for the size and/or number of objects… I tried the three-class Otsu; as you can see in the attached jpg files, if I use a more lenient threshold I identify all the objects, but the bright ones are enlarged or clumped. With a more stringent threshold the bright objects are fine, but some of the weak spots are skipped or come out very small (if I set the minimum diameter to 1 or 2 they won’t be skipped, but then I get plenty of objects 1 to 3 pixels in size, which skews the average particle size).

Thanks again for any other suggestions…

J.T.





I don’t know why, but I can’t see the tif file I posted. I am posting it again as a zip file…
cell1_ch00.zip (373 KB)

Hi J.T.,

I’m posting a pipeline that should get you part of the way to a solution. The primary considerations in the construction of this pipeline were these:

  • Using EnhanceOrSuppressFeatures with Speckles as the feature and a filter size set to the approximate width of the largest object you expect to encounter. This is intended to remove spatial intensity heterogeneities larger than the filter size while preserving features smaller than it. I didn’t know whether your image actually had such heterogeneities, but I didn’t think it would hurt to be proactive.
  • Using Laplacian of Gaussian as the declumping method. This is essentially a blob-detection method, which often works well for objects characterized by a single intensity peak but variable in intensity across the image. The downside is that the settings can be tricky to tune, so I’ve done the best I can in this case (see the rough sketch below).
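As a very rough MATLAB sketch of what those two steps do (not the actual CellProfiler code; the filter sizes and the response cutoff are placeholders to tune):

[code]
% Rough analogue of the two steps above: top-hat speckle enhancement, then LoG blob detection.
I = im2double(imread('cell1_ch00.tif'));
enhanced = imtophat(I, strel('disk', 30));        % disk radius ~ largest object width
sigma = 5;                                        % placeholder blob scale
h = fspecial('log', 2*ceil(3*sigma)+1, sigma);    % Laplacian-of-Gaussian kernel
resp = imfilter(enhanced, -h, 'replicate');       % bright blobs give a positive response
seeds = imregionalmax(resp) & (resp > 0.01);      % candidate particle centres; cutoff is arbitrary
[/code]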

The pipeline is not perfect, but should get you pointed in the right direction.

Regards,
-Mark
2012_06_07.cp (3.69 KB)