Compensating brightness attenuation in a confocal stack?

Hi, when you image a slide under a confocal, the image grows dimmer and loses contrast as you go deeper into the sample.
I was wondering what modern postprocessing methods people here might be using to compensate for that effect (i.e. some z-axis-dependent brightness adjustment).
Thanks!

Any axial intensity adjustment will have to make some major assumptions about the underlying sample, so one additional question to ask would be “why”?

If you’re just looking to have an attractive image, and you’re willing to assert that the underlying sample was actually of homogeneous intensity, you could try something simple like histogram normalization (in Fiji: Image > Adjust > Bleach Correction > Histogram Matching)… but note that while that will scale the intensity of deeper slices up to match the shallow slices, it cannot recover the lost signal-to-noise ratio in the deeper slices. So things will still look worse, even qualitatively, after the correction. Some acquisition software lets you gradually increase the laser power as you go deeper to retain SNR… but that has dubious quantitative value (and you still lose contrast).
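If you’d rather do that step outside Fiji, here’s a minimal sketch of the same idea in Python, matching each plane’s histogram to the first (shallowest) plane with scikit-image. The array name `stack` and the function name are just placeholders; it assumes a (z, y, x) numpy array.

```python
import numpy as np
from skimage import exposure

def match_to_first_slice(stack):
    """Return a copy of the stack with each plane histogram-matched to plane 0.

    Roughly the same idea as Fiji's Histogram Matching bleach correction:
    it rescales intensities, but cannot recover lost SNR in deep planes.
    """
    reference = stack[0]
    corrected = np.empty(stack.shape, dtype=float)
    corrected[0] = reference
    for z in range(1, stack.shape[0]):
        corrected[z] = exposure.match_histograms(stack[z], reference)
    return corrected
```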

As you know, that loss in signal can arise from any combination of causes, including bleaching, spherical aberration, scattering, and absorption… so even with more sophisticated correction methods, without an accurate model of the underlying object, any corrections you make to improve the appearance of the image must necessarily make some major assumptions about the sample, and will therefore not be all that useful for quantification.


Thanks for the reply. In my case I would be looking at C. elegans, worms that have a mostly symmetric body plan and are imaged from the side. I was wondering if I could use the brightness variation in a few well-characterized symmetric features (the brightest pair of neurons) to derive an adjustment for each z-plane.

If you have internal standards like that – something you “know” to be the same intensity – then you could probably devise a more rational approach to the intensity normalization. The way I’m thinking about it (and others here might have a better strategy), you’re looking for a curve that gives, at each z-index, the scalar by which you multiply the offset-corrected intensity in the original image to generate the corrected image. Each “standard” object (the neurons in your case) serves as a point along that curve, and you would interpolate between them for the other planes. With only two objects, you can’t get much more than a linear curve (without making many more assumptions), which is unlikely to match the true (probably nonlinear) intensity decay caused by bleaching, aberrations, and scattering… but it might be something to play with. If you had beads embedded throughout the sample, you could get fancier.
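As a rough sketch of what that could look like (assuming a (z, y, x) numpy array, two reference z-indices, and measured intensities for the reference objects; all names here are hypothetical placeholders):

```python
import numpy as np

def reference_correction(stack, z_ref, measured, offset=0.0):
    """Scale each (offset-corrected) plane by a factor interpolated from
    reference objects assumed to be equally bright.

    stack    : 3D array (z, y, x)
    z_ref    : z-indices of the reference objects, e.g. [z_neuron_a, z_neuron_b]
    measured : measured intensities of those objects at the corresponding planes
    offset   : detector/camera offset to subtract before scaling
    """
    measured = np.asarray(measured, dtype=float)
    # Factor that would bring each reference plane up to the brightest reference
    factors = measured.max() / measured

    # Linear interpolation across all z-indices (flat beyond the references)
    order = np.argsort(z_ref)
    z_all = np.arange(stack.shape[0])
    curve = np.interp(z_all, np.asarray(z_ref)[order], factors[order])

    return (stack.astype(float) - offset) * curve[:, None, None]
```

With only two reference planes this is exactly the linear curve mentioned above; with beads throughout the sample you could feed in more points and use a smoother interpolation or an explicit model fit.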

If you just want to assume an exponential decay with depth (which may or may not be a good assumption), you can use Fiji’s Image > Adjust > Bleach Correction > Exponential Fit… but that will use all planes (not just your two reference neurons).
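If you wanted to do the equivalent fit yourself (for instance, to inspect the fitted decay constant or restrict the fit to a sub-region), a sketch with scipy might look like the following; again the names are placeholders, and the per-plane mean is only a crude proxy for the true decay:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_correction(stack):
    """Fit mean intensity per plane to i0 * exp(-k * z) and divide it out."""
    z = np.arange(stack.shape[0], dtype=float)
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)

    def decay(z, i0, k):
        return i0 * np.exp(-k * z)

    (i0, k), _ = curve_fit(decay, z, means, p0=(means[0], 0.01))

    # Scale every plane back to the fitted intensity at z = 0
    return stack.astype(float) / np.exp(-k * z)[:, None, None]
```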

Anyway, others here might have a different/better strategy… but all the approaches I can think of make too many assumptions to be of much use for quantitation (though they might yield an attractive image).