Re-normalisation of total image intensity after image contrast restoration (deconvolution)

Tags: imagej, deconvolution, normalization

#1

@eric-czech @bnorthan @StephanPreibisch @fjug
Hi Chaps, it’s summer and I have a little time to think, so I’m thinking of updating my deconvolution demo script for teaching/seminars. Link to that at the bottom of this message…
(It runs @bnorthan’s Ops RL, a “see how it works” implementation using only IJ built-in functions, and thirdly the IterativeDeconvolve3D plugin.)

I was wondering:
Does anyone know the rationale behind re-normalizing the total image intensity of the restored image, so that it has the same total summed intensity as the raw image?
A paper or article or book chapter explaining why that is the right thing to do?

Does ops RL do that? Does the old IterativeDeconvolve3D plugin do it? Does CARE do it? Does YacuDecu do it? Does SPIM multiview deconv do it?

Do you think it’s the right thing to do, or not?
If yes, why?
If no, why not?

I’m thinking it’s maybe not right to do that, as we expect to restore, into the smaller features, signal that was attenuated by the objective lens. The raw image is wrong, so why assume it has the right total pixel intensity sum? Hmmmmmm???


#2

Hi @chalkie666

This is an interesting question. If no extension is used, and the PSF has a total sum of 1, then the total intensity before and after deconvolution should be the same.
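For intuition, here is a minimal numpy sketch (a toy 1D model of my own, not the Ops code) showing why: with circulant (circular) convolution and a unit-sum PSF, the blurred sum equals the raw sum, and RL built from the same circulant operators conserves it at every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0, 100, size=256)         # toy 1D "image"

psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()                            # PSF with a total sum of 1

# Circulant (circular) convolution via the FFT, with no extension at all
kernel = np.zeros_like(img)
kernel[:psf.size] = psf
kernel = np.roll(kernel, -(psf.size // 2))  # centre the PSF on pixel 0
blurred = np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(kernel)))

# sum(blurred) = sum(img) * sum(psf) = sum(img)
print(img.sum(), blurred.sum())             # equal, up to float rounding
```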

However if the image is extended or otherwise modified to handle boundary conditions the situation is more interesting.

In reality the total energy of the reconstruction should be slightly different from the total energy of the detected image. This is because:

  1. Some photons from emitters inside the image window fall outside the window and are not detected.
  2. Some photons from emitters outside the image window may be detected inside the image window.

So, depending on the true locations of the emitters, the “true” reconstruction can theoretically have either lower or higher total intensity than the detected image.
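A toy numpy illustration of point 1 (hypothetical numbers, just to make it concrete): an emitter near the window edge loses part of its blur outside the acquisition window, so the detected sum falls short of the emitted flux.

```python
import numpy as np

# Point emitter near the edge of a 64-pixel window, emitting 1000 photons
window = np.zeros(64)
window[2] = 1000.0

psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
psf /= psf.sum()                      # unit-sum PSF, half-width 8 pixels

# 'full' convolution: the blur that physically exists, including the part
# that lands outside the acquisition window
full = np.convolve(window, psf, mode='full')
detected = full[8:8 + 64]             # the 64 pixels the camera actually sees

print(full.sum())                     # 1000.0 -> all emitted photons
print(detected.sum())                 # < 1000 -> some fell outside the window
```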

In practice, convolution and deconvolution usually use circulant operators and the image is extended using a boundary strategy. The boundary strategy assigns a (usually) simple estimate for the intensities outside the acquisition window. During deconvolution these intensities may be assigned back into the “imaging” window, thus resulting in a higher total intensity after deconvolution.

Neither Ops RL nor YacuDecu renormalizes the image. Both Ops and the experimental ImageJ wrappers to YacuDecu extend the image before Richardson-Lucy and crop back to the original window size after RL, so the total intensity can be different after deconvolution. Both Ops RL and my modified version of YacuDecu RL have the option to use a non-circulant normalization factor for edge handling, as described here and originally here. In that case the total intensity after deconvolution can also be different from the total intensity before deconvolution.
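To make the extend-then-crop effect concrete, here is a toy numpy RL of my own (not the Ops or YacuDecu code): pad with a simple edge-replication boundary strategy, run plain circulant RL, crop back, and the cropped sum in general no longer matches the raw sum.

```python
import numpy as np

def circ_conv(x, otf):
    # Circulant convolution: multiplication in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(x) * otf))

rng = np.random.default_rng(1)
img = rng.poisson(50, size=128).astype(float)   # toy raw image

psf_small = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf_small /= psf_small.sum()

pad = 16
ext = np.pad(img, pad, mode='edge')             # simple boundary strategy

psf = np.zeros_like(ext)
psf[:psf_small.size] = psf_small
psf = np.roll(psf, -(psf_small.size // 2))      # centre the PSF on pixel 0
otf = np.fft.fft(psf)

est = np.full_like(ext, ext.mean())             # flat starting estimate
for _ in range(50):                             # plain Richardson-Lucy
    ratio = ext / np.maximum(circ_conv(est, otf), 1e-12)
    est = est * circ_conv(ratio, np.conj(otf))  # conj(OTF) = flipped PSF

print(img.sum(), est[pad:-pad].sum())           # in general, not equal
```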

Section 4.3.1 of the DeconvolutionLab2 paper also touches on some of these issues.

In summary I’m not sure renormalizing is the proper thing to do. The total photon count of a reconstruction can be different from that of the image because photons can be “reallocated” across the window boundaries.

Brian


#3

Thanks @bnorthan That helps me understand.

Interesting that some of the algos don’t normalize integrated intensity at the end.

You seem to be talking about small differences above, though: little details, slight differences.
I am thinking about the validity of the assumption that:
“The raw image and result image ‘should’ have close to the same total integrated intensity.”
Where does that logic come from? It seems intuitive… but the harder I think about it, the less I think it makes sense physically, and light doesn’t always behave intuitively! :wink:

Thinking out loud:
We are restoring the contrast of the image: the OTF tells us that the lowest-frequency features should not lose signal in the lens, and so remain the same after restoration, but small features lose contrast strongly in the lens, and so should gain contrast strongly in the restoration, because the OTF at high spatial frequencies is small…
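To check that intuition numerically, here is a quick numpy sketch, using a Gaussian as a stand-in PSF (a real widefield OTF is roughly triangular rather than Gaussian, so this is only illustrative):

```python
import numpy as np

# Stand-in PSF: a Gaussian, normalised to a total sum of 1
x = np.arange(-32, 32)
psf = np.exp(-0.5 * (x / 2.0) ** 2)
psf /= psf.sum()

# OTF magnitude = |FFT of the PSF| (ifftshift puts the PSF centre at pixel 0)
otf = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))

print(otf[0])                     # 1.0 -> zero frequency passes unattenuated
print(otf[5], otf[15], otf[25])   # magnitude falls off at higher frequencies
```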

I’m thinking that this should mean that the total image intensity after restoration should be much larger than in the raw image, because we have failed to catch a bunch of the signal, especially if the object structures contain a lot of very high spatial frequencies, like a field of fluorescent beads???

Trying to think about it another way (my maths isn’t great, but I try to imagine things visually, the way Feynman explains them):
Consider the way the low and high spatial frequency info goes through the lens:
The low-frequency info goes at a low angle, so the axial (z) component of its vector is close to 100% of its size.
But the high-frequency info goes through the lens at a high angle, so the axial (z) component of that high-angle vector is much smaller.
We are doing far-field imaging, so we mostly get the z (axial) components of the photons, and not much of the lateral components… I don’t know if this is physically correct… but I try to imagine the physical mechanism of how the OTF gets to be the shape it is. And it is roughly triangular…

Would this mean that, since we detect only a small amount of the total real signal from high-frequency features, we should be correcting that systematic error, and that this is what our deconvolution algorithms in fact do: boost high frequencies, but not the lowest, according to the shape of the OTF?

I guess I’m still looking for a “physical justification” of why the raw and reconstructed images “should” have the same integrated intensity, especially for structures that contain lots of fine detail (high spatial frequencies).

Any clues on this assumption’s correctness?

cheers

Dan


#4

Hi @chalkie666

Here is my interpretation from a signal processing/convolution theory point of view. Hopefully others, more knowledgeable about the optics, will comment on that part.

Real fluorescent signals always have a constant (or ‘0 frequency’) component, i.e. the signal does not have negative values. I think this is illustrated nicely in your demo.

Note your chirp wave has an average value of approximately 5000; the oscillating part goes up and down between 0 and 10000.
[image: the chirp signal before convolution]
After convolution the highest frequencies have been attenuated, and you are left with an (almost) flat signal of 5000. Pass-band frequencies have “reduced contrast” between peak and valley.

[image: the chirp signal after convolution]
So before and after convolution with the PSF, the average (and sum) of the signal is the same (allowing for small differences from energy that crossed the imaging window boundary).
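A rough numpy re-creation of that demo (my sketch, not your actual script; the chirp parameters are made up):

```python
import numpy as np

# A chirp between 0 and 10000, mean ~5000, frequency rising left to right
t = np.linspace(0.0, 1.0, 2048)
signal = 5000.0 * (1.0 + np.sin(2.0 * np.pi * (5.0 + 195.0 * t) * t))

psf = np.exp(-0.5 * (np.arange(-50, 51) / 12.0) ** 2)
psf /= psf.sum()                        # unit-sum blur kernel

blurred = np.convolve(signal, psf, mode='same')

# The mean (the '0 frequency' component) survives the blur...
print(signal.mean(), blurred.mean())    # both ~5000, bar small edge losses

# ...but peak-to-valley contrast collapses where the frequency is high
lo, hi = 1400, 1900                     # high-frequency stretch, clear of edges
print(np.ptp(signal[lo:hi]), np.ptp(blurred[lo:hi]))
```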

It may help to think about the difference between the “sum” of a signal and the “energy” of a signal. A pure periodic signal has a sum of zero but a non-zero energy. The sum is x1 + x2 + x3 + …, while the energy is |x1|^2 + |x2|^2 + |x3|^2 + …

Think of a case where you have a repeating signal … 2 0 2 0 2 0 …. After convolution and windowing it’s 1 1 1 1 1 1. The sum is 6 in both cases, but the energy is 12 in the first case and 6 in the second.

(Note that the signal 2 0 2 0 2 0 can be represented as a combination of the constant 1 1 1 1 1 1 and the periodic 1 -1 1 -1 1 -1; if you apply a low-pass transform (convolution with a low-pass kernel), the periodic part is attenuated.)
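The same arithmetic, checked in numpy:

```python
import numpy as np

a = np.array([2., 0., 2., 0., 2., 0.])      # before blurring
b = np.array([1., 1., 1., 1., 1., 1.])      # after: the oscillation is gone

print(a.sum(), b.sum())                     # 6.0 and 6.0  -> sum unchanged
print((a ** 2).sum(), (b ** 2).sum())       # 12.0 and 6.0 -> energy halved

# The decomposition from the note above: constant part + zero-sum periodic part
dc = np.full(6, 1.0)
ac = np.array([1., -1., 1., -1., 1., -1.])
print(np.allclose(a, dc + ac))              # True
```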

Images from a counting device are a combination of a positive constant value and frequency components. It’s the frequencies that get attenuated, reducing the “energy” but not the “sum” of the signal.

Does this explanation sound correct?? Does it help??