Normalizing intensity on brain tissue fluorescence images (confocal microscopy)

Hi everyone,

I acquired fluorescence images of brain tissue sections (25 µm thick) stained with antibodies targeting c-Fos, a marker of neuronal activation. I also have an endogenous signal (in green, thanks to Cre transgenic mouse lines) that is characteristic of a cell population.

I want to find the cells where the activity signal (red) and the cell-population marker (green) colocalize.
However, I have a lot of images, so I need to normalize them so that the same automated counting can be applied to all of them.

I tried iterative deconvolution because there is strong noise in my images, but I am losing intensity and information.

  1. Do you have any tips to avoid this loss of intensity, or is it unavoidable with iterative deconvolution? Do you know whether the fact that I can't run iterative deconvolution on RGB pictures (I need 8-bit images for this preprocessing) is a source of information loss?

  2. Do you know a way of normalizing the image intensity while also removing the background and enhancing cell contrast without losing information (applying as few filters as possible)? Does iterative deconvolution normalize the intensity (as well as enhancing the image properties)? I might use HALO for the counting, which mainly needs intensity normalization.

  3. Do you think I should use software other than ImageJ (e.g. IMARIS) for preprocessing, given that ImageJ is not dedicated to processing fluorescence confocal images?

Thank you in advance for your help,

Hello Baptiste,

I take it these are confocal Z stacks.
I would suggest knocking down noise with a Gaussian blur (radius 1.0 or less) prior to deconvolution or other processing steps (like thresholding). Deconvolving raw intensity data, assuming it was collected at more than 8 bits, will yield better results. Deconvolution should increase SNR, not decrease it. You can consider a simple unsharp mask filter if all you want is better contrast for segmenting cells. It's very fast, though you need to test a few blur and percentage values to get something optimized for your data. (A go-to start is sigma 3 and 0.45 for strength; this works well on most things.)
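To make the blur-then-sharpen suggestion concrete, here is a minimal pure-Python sketch of what Gaussian smoothing and unsharp masking do to a 1-D intensity profile. The sigma/amount values are the starting points suggested above; in practice you would use Fiji's built-in Gaussian Blur and Unsharp Mask filters rather than this toy code.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Same-size convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def unsharp_mask(signal, sigma=3.0, amount=0.45):
    """out = img + amount * (img - blurred): boosts local contrast."""
    blurred = convolve(signal, gaussian_kernel(sigma, radius=int(3 * sigma)))
    return [v + amount * (v - b) for v, b in zip(signal, blurred)]

profile = [10, 10, 10, 80, 200, 80, 10, 10, 10]  # a cell peak on background
sharpened = unsharp_mask(profile)
print(max(sharpened) > max(profile))  # the peak stands out more
```

The same idea extends directly to 2-D images; the point is that unsharp masking amplifies the difference between a pixel and its blurred neighbourhood.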
Regarding normalization, there are built-in tools to “enhance contrast” that might be what you seek, but be careful using them since vanishing signal will typically lead to poor segmentation later on as background replaces signal. You can try using the ‘stack histogram’ option to limit the amount of normalization in slices that have low signal. Of course, the data are now solely for qualitative purposes since intensity ranges are now made arbitrary.
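As a rough sketch of what an "enhance contrast" style rescale does (and why signal can vanish if you saturate too aggressively), here is a pure-Python illustration. The 0.35% saturation figure mimics ImageJ's default; this is not the plugin's actual code.

```python
def enhance_contrast(pixels, saturated=0.35, out_max=255):
    """Stretch intensities so `saturated` percent of pixels clip at each end."""
    ordered = sorted(pixels)
    n = len(ordered)
    k = int(n * saturated / 100.0)      # pixels allowed to saturate per tail
    lo, hi = ordered[k], ordered[n - 1 - k]
    if hi == lo:
        return list(pixels)             # flat image: nothing to stretch
    scale = out_max / (hi - lo)
    return [min(max(round((p - lo) * scale), 0), out_max) for p in pixels]

dim = [5, 6, 7, 8, 40, 50, 60]          # low-contrast intensities
print(enhance_contrast(dim))            # stretched to fill 0..255
```

Note that after such a stretch the intensity ranges are arbitrary, which is exactly why the data become qualitative-only, as said above.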
You will need to decide if you can count 2D or need 3D methods. Either will critically depend on the ability to segment the cells and how well separated they are in 2D and 3D space. Getting accurate counts can prove difficult, and any automated methods must be verified by hand counting. Consider stereology approaches!
Hope this gets you on the right path…


Hello Vytas,

Thank you so much for answering!

Actually these are not Z stacks (I'm aware that it would have been a good idea, but I had too many samples to analyse and not enough time).
So I'm doing 2D deconvolution (using a single-slice PSF).

[quote="Vytas, post:2, topic:5117"]
I would suggest knocking down noise with a Gaussian blur filter radius 1.0 or less prior to deconvolution or other processing steps (like thresholding). Deconvolving RAW intensity data, assuming collected >8bit, will yield better results. Deconvolution should increase SNR, not decrease it.
[/quote]

Thank you very much for the tips.
So I tried preprocessing prior to deconvolution, and here are some results:

[Images: Raw + Deconvolution | Gaussian Blur 1 + Deconvolution | Raw image]

As you can see, blurring works very well!
However, it looks like there is less light left to distinguish the cells. That is a problem insofar as HALO is no longer able to recognize cells it used to detect on the raw image (even if it had trouble with the contouring, which is why I thought about deconvolution in the first place).

I have a lot of images to analyze with HALO, which applies one manual threshold to all of my images. I think I need to match their histograms (so they have more or less the same intensity profile).
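The histogram matching described here (mapping each image's intensity ranks onto a reference image's values so a single threshold works across the batch) can be sketched in pure Python as follows. This is only an illustration; on real images, Fiji's histogram-matching tools or scikit-image's `match_histograms` do this properly.

```python
def match_histogram(image, reference):
    """Replace each pixel by the reference value of equal intensity rank."""
    ref_sorted = sorted(reference)
    # indices of `image` ordered from dimmest to brightest pixel
    order = sorted(range(len(image)), key=lambda i: image[i])
    out = [0] * len(image)
    for rank, idx in enumerate(order):
        # scale this pixel's rank into the reference's index range
        j = rank * (len(ref_sorted) - 1) // (len(image) - 1)
        out[idx] = ref_sorted[j]
    return out

reference = [0, 10, 20, 200, 255]
dim_image = [3, 5, 7, 9, 11]   # same structure, compressed intensity range
print(match_histogram(dim_image, reference))  # -> [0, 10, 20, 200, 255]
```

After matching, every image in the batch shares roughly the reference's intensity distribution, so one global threshold behaves consistently.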

Do you think the CLAHE plugin could work for that?

I guess I can’t use it since I don’t have stacks.
In any case, I want to normalize intensity across a batch of images (not within a single image).

And indeed, I just want to colocalize cells (so I don’t need to measure intensity).

Thank you for all your advice,

Hi @Baptiste

What could be happening is that after deconvolution you have a higher maximum intensity. If the image is rescaled for visualization, some structures can appear to have become dimmer, but this may not be the case; it is actually the visualization range that has changed. You can check this by drawing a line profile over an object in the original image, then highlighting the deconvolved image, pressing Ctrl+Shift+E (Restore Selection) to copy the ROI, and taking a line profile (Ctrl+K) on each.
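A toy numeric illustration of this point: the raw pixel data are untouched, and only the mapping from raw intensity to screen brightness changes when the display range is reset to a new, higher maximum.

```python
def to_display(pixels, display_min, display_max):
    """Map raw intensities to 0-255 screen values for a given display range."""
    span = display_max - display_min
    return [min(max(round((p - display_min) * 255 / span), 0), 255)
            for p in pixels]

cell = [0, 120, 240]               # raw intensities along a line profile
before = to_display(cell, 0, 240)  # display range fit to this image
after = to_display(cell, 0, 960)   # display range after a higher max appears
print(before, after)               # same raw data, 'after' looks 4x dimmer
```

This is exactly what comparing line profiles of the raw pixel values would reveal: the structures are not dimmer, the display scaling is.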

Are you able to share the raw images and the PSF estimate that you used? If so, I can take a look and examine what is happening to the image intensities before and after deconvolution.



Hi @bnorthan,
Thanks for your reply.

These are the original colour channels (lossless PNG format) I used for deconvolution (I didn't use the blue channel, i.e. the DAPI staining, because its quality is too poor).

Green channel:

Red channel:

The PSF estimate:

Merging the two deconvolved colour channels (without preprocessing):

Thank you so much in advance.
I just have one question: whatever intensity change deconvolution produces, is it enough to normalize intensity across a batch of images? (Different brain sections from the same mouse.)


Hi @Baptiste

One of the issues was the scale bar in your image. The scale bar is an artificial structure with bright (255) pixels. After deconvolution the scale-bar pixels got even brighter and made the rest of the image look dim, because the maximum visualization intensity was chosen based on the scale-bar values. If I crop out the scale bar I get a much better visualization (cropping doesn't change the raw pixel values).
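A small illustration of the scale-bar effect with made-up numbers: one burned-in 255 region dominates an auto-chosen display maximum, so genuine signal is mapped to dim screen values, and cropping the bar restores the visualization without altering any data.

```python
def screen_value(p, dmin, dmax):
    """Map a raw intensity into 0-255 for the given display range."""
    return round((p - dmin) * 255 / (dmax - dmin))

tissue = [0, 5, 40, 60, 80]      # genuine signal tops out at 80
with_bar = tissue + [255, 255]   # burned-in scale-bar pixels

# Auto-ranging over the whole image uses the bar's 255 as the maximum:
dim_pixel = screen_value(80, min(with_bar), max(with_bar))
# After cropping the bar out, the same pixel fills the display range:
bright_pixel = screen_value(80, min(tissue), max(tissue))
print(dim_pixel, bright_pixel)   # the brightest real signal: 80 vs 255
```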

I used the ImageJ script editor's 'DeconWithGaussian' example, with sxy of 2 and 20 iterations, to get this result. If you crop out the scale bar, the visualization of your own result should look better too. As I mentioned, it is useful to draw line profiles of the raw and deconvolved images and inspect the actual intensities.
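For reference, iterative deconvolution of this kind is typically Richardson-Lucy: repeatedly blur the current estimate with the PSF, compare to the observed data, and apply a multiplicative correction. Below is a minimal 1-D pure-Python sketch of the algorithm (not the actual 'DeconWithGaussian' script), showing how a blurred point source is re-concentrated.

```python
def convolve(signal, kernel):
    """Same-size convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=20):
    """Classic multiplicative RL update: est *= (obs / (est*psf)) * psf_mirror."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]            # stand-in Gaussian-like PSF
truth = [0, 0, 0, 100, 0, 0, 0]    # a point source
observed = convolve(truth, psf)    # the blurred measurement
restored = richardson_lucy(observed, psf, iterations=50)
print(max(restored) > max(observed))  # the peak is re-concentrated
```

This also shows why the deconvolved maximum can be much higher than the raw maximum, which ties back to the visualization-range issue above.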