Normalize pixel intensities across several images

Hi,
I’m new to image analysis and I have a problem that I don’t really know how to solve.
I need to quantify RNA-protein granules in a large series (>200) of confocal images, comparing wild-type and mutant conditions. My “simple” goal is to see whether the wild-type condition has more or fewer granules than the mutant.

The images have a lot of background, but I can sort of deal with this already by applying some pre-processing steps.

The main issue is that the range of pixel intensities is really different between images (not necessarily correlated with the WT or mutant condition), sometimes up to 10 times higher. Because of this, I can’t define one threshold value that would fit all images nicely (for example, one single value in ‘Find Maxima’, or one single thresholding method) and give me a reliable segmentation of the granules.

Having said that, I have a few beginner questions:

  • can I try to do some sort of normalization of the pixel intensities across all images? Would this be a sound approach in image analysis, i.e., do people normally do this?
  • if so, can anyone recommend a way to do that?
  • I thought of using the ‘Auto’ button in Brightness & Contrast, followed by ‘Apply’, which (if I understood the underlying principle of the tool correctly) would stretch the x% brightest pixels over the same range. Then I could choose one threshold value for granule counting and apply it to all images. Is this approach acceptable? A rough sketch of what I mean is below.
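Just to make sure I’m describing it clearly, here is a rough Python/NumPy sketch of what I think that would amount to (the file name, percentile and threshold values are made up, and I may well have misunderstood the tool):

```python
import numpy as np
from skimage import io

def stretch_to_common_range(img, low_pct=0.5, high_pct=99.5):
    # map the chosen intensity percentiles to [0, 1] and clip the rest
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)

img = io.imread("example_image.tif")   # hypothetical file name
norm = stretch_to_common_range(img)
granule_mask = norm > 0.6              # one threshold value shared by all images
```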

FYI, all the images were taken with the same acquisition parameters. WT and mutant sections sit on the same slide and go through the immunohistochemistry together (obviously :smiley: ).

Thanks a lot for the help!

PS: while writing this post I discovered the quantile-based normalization plugin (https://www.longair.net/edinburgh/imagej/quantile-normalization/). I will give it a try as soon as I can; I hope it works with >20 images at once.

Hi @jubs,
contrast normalization of images is a quite standard operation that normally consists of three steps:

  1. obtain mean and std value of intensities for the whole dataset
  2. obtain mean and std value for each image
  3. shift/stretch each image such that its mean/std matches the mean/std of the whole dataset

This operation normalizes image intensities and can compensate for photobleaching of fluorophores in light microscopy, or for various staining issues in volumetric electron microscopy. As you can see, it is rather simple (a rough sketch of the steps is below), and I bet it is implemented in Fiji. Unfortunately, I am not familiar with how to do that in Fiji, but you can, for example, check this topic: Normalizing brightness across a image stack
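In code terms, the three steps look roughly like this (a minimal Python/NumPy sketch just to illustrate the idea, not the actual Fiji or MIB implementation; the folder name is a placeholder):

```python
import numpy as np
from glob import glob
from skimage import io

paths = sorted(glob("dataset/*.tif"))                 # hypothetical folder
images = [io.imread(p).astype(float) for p in paths]

# 1. mean and std of intensities for the whole dataset
all_pixels = np.concatenate([im.ravel() for im in images])
dataset_mean, dataset_std = all_pixels.mean(), all_pixels.std()

# 2. + 3. per-image mean/std, then shift/stretch each image to match the dataset
normalized = []
for im in images:
    m, s = im.mean(), im.std()
    normalized.append((im - m) / s * dataset_std + dataset_mean)
```

After this kind of normalization, a single threshold value should behave much more consistently across the images.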

In our workflows we use contrast normalization in MIB, as explained here:

Best regards,
Ilya


Hi @jubs,

I would add to what @Ilya_Belevich said that playing with Brightness & Contrast won’t help, since it only modifies the display and not the data.

Nico