Quantify fluorescence intensity to analyze mitochondrial activity

Hi,

I am learning how to use ImageJ to calculate fluorescence intensity. I saw pictures of stained oocytes showing two colours (red and green), but I am not sure how the authors quantified the red and green fluorescence intensity from these pictures.

I guess they followed the steps below:

  1. In each red or green fluorescence image, they selected one oocyte as the ROI.
  2. Then they set the measurements: Analyze -> Set Measurements -> checked “Area”, “Standard deviation”, “Min & max gray value”, “Integrated density”, and “Mean gray value”.
  3. Then they ran Analyze -> Measure to calculate the red or green fluorescence intensity. A Results table appeared, and the red or green fluorescence intensity was taken as the “Integrated density” value (I have tried to sketch these steps as a small macro below).
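
A rough sketch of what I mean, assuming the red or green channel image is open and one oocyte ROI has been drawn by hand:

```
// Minimal sketch of the steps above (ImageJ 1.x macro).
// Assumes the red or green channel image is open and one oocyte ROI is drawn.
run("Set Measurements...", "area mean standard min integrated display redirect=None decimal=3");
run("Measure");                  // adds one row to the Results table for the current ROI
row = nResults - 1;
print("Area: " + getResult("Area", row));
print("Mean gray value: " + getResult("Mean", row));
print("IntDen: " + getResult("IntDen", row));        // Area x Mean gray value
print("RawIntDen: " + getResult("RawIntDen", row));  // sum of pixel values
```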

Could anybody please check for me:

  • Are these steps right or wrong?
  • Is the red or green fluorescence intensity affected by the area/size of the oocytes? If I have several oocytes in one image (some smaller than others) and I want to calculate the red or green fluorescence intensity for each oocyte, do I need to use an identical ROI area for all oocytes? And after measuring the fluorescence intensity of each oocyte, can I calculate the mean red or green fluorescence intensity from those values?
  • Do I need to worry about the mean fluorescence of the background?
  • What is “Raw integrated density”?

I am quite confused, so I would really appreciate any help.
Thank you in advance!

@Phuong

Sorry for the delay in response… I’m not sure I’m the best one to help here. But I can tell you these things:

That is more-or-less correct. If you want more information on segmentation, just use the following links:

I would think that normalizing your signal measurements for area would be ideal here.
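
For example (just a sketch, assuming one ROI per oocyte has been added to the ROI Manager), dividing the integrated density by the area gives you back the mean gray value, which is already a per-area value:

```
// Sketch: size-normalized (per-area) intensity for each oocyte ROI in the ROI Manager.
// Note that IntDen / Area is simply the Mean gray value.
run("Set Measurements...", "area mean integrated display redirect=None decimal=3");
n = roiManager("count");
for (i = 0; i < n; i++) {
    roiManager("select", i);
    run("Measure");
    row = nResults - 1;
    perArea = getResult("IntDen", row) / getResult("Area", row);
    print("Oocyte " + (i + 1) + ": IntDen/Area = " + perArea);
}
```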

As long as you acquired all images using the same system and acquisition settings, the background should be consistent from image to image. If you are worried about this, you could measure the background in each image to be sure.
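
For example (a sketch only; the rectangle coordinates are placeholders for a cell-free region of your image), you could record the mean background of each image:

```
// Sketch: measure the mean background in an empty, cell-free region of the image.
// The rectangle coordinates are placeholders - adjust them to your images.
makeRectangle(10, 10, 50, 50);    // background ROI in an empty region
getStatistics(bgArea, bgMean);    // mean gray value of that region
print("Background mean: " + bgMean);
// A background-corrected integrated density for an oocyte ROI would then be:
//   correctedIntDen = IntDen - bgMean * Area
```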

The ImageJ User Guide (Set Measurements section) defines Integrated density as “the sum of the values of the pixels in the image or selection … equivalent to the product of Area and Mean Gray Value”; the raw integrated density (“RawIntDen” in the Results table) is that plain sum of pixel values, with no spatial calibration applied.
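
As a quick sanity check (a sketch, assuming a measurement has just been made on an uncalibrated image, where one pixel corresponds to one area unit), the two should agree:

```
// Sketch: for an uncalibrated image (pixel size = 1), RawIntDen (the sum of
// pixel values) should equal Area * Mean, which is what IntDen reports.
row = nResults - 1;
rawSum  = getResult("RawIntDen", row);
product = getResult("Area", row) * getResult("Mean", row);
print("RawIntDen = " + rawSum + "   Area x Mean = " + product);
```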

I hope this at least helps a bit.

eta :slight_smile:

Hi Ellen,
I have a quick question here –

Could you explain how the normalization is done? Is it the same as background subtraction?

thank you!

@arlandan

I’m currently on maternity leave… so not really responding to these at the moment. You can post on the forum asking folks for assistance with your analysis. I’ll be back here in a few months.


I think in this case it would be the mean or median intensity for the area. The Measure command, or the Results table after Analyze Particles, will give you those; you can enable them in Set Measurements.
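
A rough sketch of that route (the threshold method and size range are placeholders you would need to adjust for your images):

```
// Sketch: segment objects and get Mean/Median per object via Analyze Particles.
// The threshold method and size range are placeholders - adjust for your images.
run("Set Measurements...", "area mean median integrated display redirect=None decimal=3");
setAutoThreshold("Default dark");
run("Analyze Particles...", "size=1000-Infinity display clear add");
for (i = 0; i < nResults; i++) {
    print("Object " + (i + 1) + ": mean = " + getResult("Mean", i) + ", median = " + getResult("Median", i));
}
```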

Thank you for the response, and good luck!

Thank you! I guess my assumption was not correct when I asked the previous question. What I have been trying to figure out is this: when fluorescence intensity is compared across images, is normalization a required step to make that comparison valid?
What exactly is the normalization and how is it done? My images were taken with exactly the same settings (exposure, contrast, magnification, etc.). I am wondering whether normalization actually refers to background subtraction in this situation?

Best!

Normalization for area simply means taking the, well, mean or median. In other cases, normalization can be attempted to correct for different instruments being used, different operators, or, in general, different “batches”. That is batch normalization.
*Sorry, that specific link was to batch normalization for deep learning; it should have been to an example of normalization for batches… which is something different.

The most common cases seem to come from scRNA-seq, like this one, but normalizing for batch effects occurs across most types of data collection.

How well it works depends on the cause and extent of the batch variation - if half of your images are “normal” and half are saturated, it will not fix the saturated images, because that information was never there in the first place. In other words, it is not a catch-all for every kind of variation.

Other kinds of normalization are possible, but generally not a good idea. You might imagine what would happen to your negative control if you normalized it along with all of your positive samples.


Thanks for the quick response! It sounds like the normalization I am trying to do is a kind of batch normalization. I am not sure whether my images are normal or saturated, though, so batch normalization looks unnecessary for me, if I understood correctly.

Best!
