Fluorescence intensity quantification

So I’ve read almost all the forum threads I can find on here about measuring fluorescence intensity, trying to find the best option. I want to compare intensity between different conditions, and also compare the intensity of multiple cells within a condition.

I’m just wondering if I’m doing the same thing twice (maybe subtracting too much background). Here’s my suggested workflow (with a macro sketch after the list):

  1. SUM-project the 40x z-stack.
  2. Subtract the background with a rolling ball radius / sliding paraboloid.
  3. Select an ROI around the cell and take measurements (mean gray value, area, integrated density).
  4. Select an ROI of background near the cell, following this method, to get the corrected total cell fluorescence (CTCF).
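In macro form, the workflow looks roughly like this (a sketch only: it assumes the z-stack is the active image, the ROI Manager already holds the cell ROI at index 0 and a nearby background ROI at index 1, and the rolling-ball radius of 50 is a placeholder):

run("Z Project...", "projection=[Sum Slices]");       // 1. SUM projection
run("Subtract Background...", "rolling=50 sliding");  // 2. sliding paraboloid
run("Set Measurements...", "area mean integrated redirect=None decimal=3");
roiManager("Select", 0);                              // 3. cell ROI
run("Measure");
roiManager("Select", 1);                              // 4. background ROI near the cell
run("Measure");
// then: CTCF = IntDen(cell) - Area(cell) * Mean(background)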

I’m just wondering whether steps 2 and 4 are doing the same thing, and whether I should do both or just choose one.

Thanks.

The rolling ball algorithm changes the intensity data, so consider carefully whether it is a good idea to alter the very data you intend to measure.
That type of “correction” can be useful for segmentation, but I would avoid it for intensity measurements.
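If you do want the rolling ball to help find cells, one option is to run it on a duplicate, segment there, and measure on the untouched original. A minimal sketch, assuming a single-channel image is open and Otsu thresholding is appropriate for it:

original = getTitle();
run("Duplicate...", "title=forSegmentation");
run("Subtract Background...", "rolling=50");  // only the duplicate is modified
setAutoThreshold("Otsu dark");
run("Create Selection");                      // selection from the thresholded duplicate
selectWindow(original);
run("Restore Selection");                     // transfer the selection to the original
run("Measure");                               // intensity measured on unmodified data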


Do you think it matters if I’m only looking at relative intensity, e.g. whether Cell 1 is more intensely stained than Cell 2? In my tests I’ve made column graphs of the different cells, and the landscape doesn’t change between methods; it’s only the absolute intensity measurements that change.

The rolling ball correction is not derived from the illumination source but from the image data itself, so the result depends on the content of each image, not on a model of the illumination.
For example, try

run("AuPbSn 40 (56K)");
run("Subtract Background...", "rolling=50 light sliding");

Do you think that the large blobs have the right intensity?

Hi. As Gabriel suggests, the ‘rolling ball’ method of compensating for uneven illumination isn’t recommended if the intent is to measure integrated intensity and compare within an image or across images. A better way is to acquire ‘background’ (flat-field) images and use them to correct for uneven illumination at the time the image is taken. That way, all images are corrected in the same way for the uneven illumination coming from the instrument.

Here are some links that talk about how to do this: http://nic.ucsf.edu/blog/2014/04/shading-correction-for-different-objectives-and-channels/

which also leads to this:
http://nic.ucsf.edu/dokuwiki/doku.php?id=flatfieldimageacquisition
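As a rough sketch of applying such a correction in ImageJ (the image titles “sample.tif” and “flatfield.tif” are placeholders, and a fuller protocol would typically also subtract a camera dark/offset image first):

selectWindow("flatfield.tif");
getStatistics(area, mean);                  // mean intensity of the flat-field image
imageCalculator("Divide create 32-bit", "sample.tif", "flatfield.tif");
run("Multiply...", "value=" + mean);        // rescale back to the original intensity range
rename("sample_corrected");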

A less ideal way is to generate a background image from your sample image itself; there are multiple ways to do this. Less desirable still is the rolling ball correction.
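For instance, one common sketch is to blur a duplicate of the image so heavily that only the illumination profile remains, then divide by it (the sigma of 100 is a placeholder and must be much larger than any cell):

original = getTitle();
run("Duplicate...", "title=pseudo_flat");
run("Gaussian Blur...", "sigma=100");       // must be >> cell diameter
imageCalculator("Divide create 32-bit", original, "pseudo_flat");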

To answer your original question: yes, step 4 is doing step 2 again in a similar, though not exactly the same, way, so doing both means you are subtracting background twice.
