Diffraction kernel (i.e. PSF) generation pitfalls

imagej
deconvolution
microscopy

#1

I ran into something recently while generating point spread functions for deconvolution of microscopy images and wanted to share it / spark a larger discussion (per the recommendations and details in this imagej-ops GitHub issue).

Basically, I wanted to test different implementations of the algorithm by @Jizhou_LI et al., so I tried the Python version, the imagej-ops Java version, and, as a baseline, the same kernels from PSFGenerator.

For a Gibson-Lanni kernel (a.k.a. PSF) with nearly default arguments in all 3 implementations, I was finding results that looked like this when viewed nearly orthogonal to the z-axis:

Gibson-Lanni kernels with “Particle Position” = 2um

What came to light in the GitHub issue was that imagej-ops automatically crops and recenters kernels while the other implementations do not, which is why the results above appear so different. The Gibson-Lanni model does, however, have a “Particle Position” parameter (pZ in the Python/Java versions) that can be set to 0 as a test to negate the centering differences with imagej-ops, as seen here, where the kernels from all three implementations are equal:


Gibson-Lanni kernels with “Particle Position” = 0um

Questions that arise from this are:

  1. Is there ever a reason not to re-center a PSF the way imagej-ops does? It seems like if you don’t, applying the PSF in deconvolution will translate the results away from their true position in an unintuitive way (see the sketch after this list). That may not be a big deal when working with large volumes with lots of empty space in the margins, but it sounds dangerous if important parts of the signal sit near the borders.
  2. How big should a PSF be so that you can be reasonably certain that important parts of it are not being cut off, as they are by PSFGenerator in the first example above?
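
For question 1, here is a minimal sketch of the translation effect, using a synthetic point source and an off-center Gaussian in place of a real Gibson-Lanni kernel; the centroid-based recentering here is just one simple stand-in for whatever ops actually does:

```python
# An off-center PSF translates the convolved image; recentering it
# on its centroid removes the shift.
import numpy as np
from scipy.ndimage import convolve, center_of_mass, shift

image = np.zeros((64, 64))
image[32, 32] = 1.0  # single point source at the center

# A Gaussian "PSF" deliberately placed off-center in its own window
yy, xx = np.mgrid[:17, :17]
psf = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = convolve(image, psf)
print(center_of_mass(blurred))   # displaced away from (32, 32)

# Recenter the PSF on its centroid before applying it
offset = np.array(psf.shape) / 2 - 0.5 - np.array(center_of_mass(psf))
psf_centered = shift(psf, offset)
blurred_centered = convolve(image, psf_centered)
print(center_of_mass(blurred_centered))  # back near (32, 32)
```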

If anyone has any thoughts/advice it would be much appreciated!


#2

Hi Eric

My experience is that with too small a PSF you miss important details, but with too large a PSF the algorithm converges more slowly. This is another area where I think more publicly available experiments and data are needed.

I usually draw a line profile through the central lateral and axial slices of the PSF and crop at the point where I can no longer see any Airy pattern.
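
For what it’s worth, here is a rough sketch of that heuristic in Python. The 1e-3 threshold is an arbitrary stand-in for “can no longer see any Airy pattern”, and the (z, y, x) axis order is an assumption:

```python
# Take the central lateral and axial line profiles of a PSF array
# and find where they drop below a small fraction of the peak.
import numpy as np

def suggested_crop_radii(psf, frac=1e-3):
    zc, yc, xc = (s // 2 for s in psf.shape)
    lateral = psf[zc, yc, :]   # profile along x through the center
    axial = psf[:, yc, xc]     # profile along z through the center
    radii = []
    for profile, center in ((lateral, xc), (axial, zc)):
        above = np.nonzero(profile > frac * profile.max())[0]
        radii.append(max(center - above[0], above[-1] - center))
    return radii  # [lateral radius, axial radius] in voxels
```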

I’d advise actually setting up some experiments. If you have an image for which you are reasonably confident you have the correct PSF (either measured, or generated theoretically from reasonable estimates of the metadata), deconvolve the image with several different sizes of PSF and evaluate how the size affects the results.

I’m hoping to get a chance to do something similar myself in the next week or two.


#3

By slower convergence, do you mean that certain deconvolution algorithms actually require more iterations to converge if some of the information in a PSF is cropped off, or that each iteration (for the same total number of iterations) is slower when using a larger PSF matrix? If the latter, are there many algorithms that don’t pad the PSF back up to at least the size of the original image, and possibly to the next power of 2 (or product of small primes), for the sake of using a faster FFT algorithm?
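
For reference, scipy exposes that “fast size” logic directly; a small sketch, with arbitrary example shapes, padding to the full linear-convolution size and then up to the next fast FFT size:

```python
# next_fast_len returns the smallest "FFT-friendly" size
# (a product of small primes) at least as large as its argument.
from scipy.fft import next_fast_len

image_shape, psf_shape = (300, 300), (65, 65)
padded = [next_fast_len(i + p - 1) for i, p in zip(image_shape, psf_shape)]
print(padded)  # e.g. [375, 375]: 375 = 3 * 5**3 is the next fast size >= 364
```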

Separately, I’m wondering if it would make sense to look at the crop window size vs. something like the entropy of the gradients within the PSF (e.g. https://arxiv.org/abs/1609.01117) to know when the excluded portions aren’t that important. Though I guess that might be naive in a spatially/depth-varying case, or with a measured PSF.
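
As a rough sketch of what I mean (my own crude stand-in, not the method from that paper): compare the entropy of the gradient magnitudes inside a candidate crop window against that of the full PSF, as a check on how much structure the crop would discard:

```python
# Ratio near 1 would suggest the window keeps most of the
# gradient structure; the bin count is an arbitrary choice.
import numpy as np

def gradient_entropy(a, bins=64):
    grads = np.gradient(a.astype(float))
    mag = np.sqrt(sum(g ** 2 for g in grads)).ravel()
    hist, _ = np.histogram(mag, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -(p * np.log(p)).sum()

def crop_entropy_ratio(psf, radius):
    zc, yc, xc = (s // 2 for s in psf.shape)
    window = psf[zc - radius:zc + radius + 1,
                 yc - radius:yc + radius + 1,
                 xc - radius:xc + radius + 1]
    return gradient_entropy(window) / gradient_entropy(psf)
```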

Oh, also: do you have any recommendations on how to evaluate changes to the results? In other words, if you were to try an experiment with different PSFs on the same image, what would you look for in the deconvolved result? Are there quantitative measures you like to consider, or is it more of an obvious difference you just know when you see it?


#4

Hi Eric

Recently I saw some cases where Richardson-Lucy seemed to take more iterations to converge if the PSF had greater support. To confirm I didn’t get fooled by a confounding factor, I need to test this formally or find a paper that has done this experiment. Sometime in the near future I’d like to set up some ImageJ notebooks to run these types of experiments.
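
In the meantime, a minimal version of that experiment could be run outside ImageJ with scikit-image’s Richardson-Lucy. Everything here is synthetic and the PSF sizes are arbitrary choices (`num_iter` is the parameter name in recent scikit-image versions):

```python
# Track reconstruction error against iteration count for Gaussian
# PSFs of different support, all derived from the same blur.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
truth = gaussian_filter(rng.random((128, 128)), 1.0)

def make_psf(size, sigma=3.0):
    r = size // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    psf = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

observed = fftconvolve(truth, make_psf(63), mode='same')

for size in (15, 31, 63):          # increasing PSF support
    psf = make_psf(size)
    for n in (10, 30, 50):
        est = richardson_lucy(observed, psf, num_iter=n)
        rmse = np.sqrt(np.mean((est - truth) ** 2))
        print(f"support={size:2d} iters={n:2d} rmse={rmse:.4f}")
```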

You always need to pad the PSF back up to at least the size of the original image in order for the Fourier multiplication to work. However, the question is how much further to extend the image (see the sketch after this list):

  1. You may want to extend by half the PSF size to prevent any wrap-around artifacts (especially if you have extended objects). If this is the case, a smaller PSF will be faster, because the extended image size will be smaller.
  2. You then may want to extend further to a “fast” FFT size (usually a power of 2 or a product of small primes).
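
As a Python stand-in for that two-step extension (not the actual ops code):

```python
# Extend by half the PSF size on each side, then round each
# dimension up to the next fast FFT size.
from scipy.fft import next_fast_len

def extended_shape(image_shape, psf_shape):
    bordered = [i + (p // 2) * 2 for i, p in zip(image_shape, psf_shape)]
    return [next_fast_len(n) for n in bordered]

print(extended_shape((128, 512, 512), (64, 128, 128)))  # -> [192, 640, 640]
```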

This logic is in ops, and it is part of the reason the code is so complicated. If you do not give a border size, ops will automatically extend by the PSF size, then extend further to the next fast size. Optionally, you can give a border (which can be smaller than half the PSF size); a smaller border makes the extended size smaller and the algorithm run faster, with the trade-off being potential edge artifacts.

Do you have any images of beads of known size? You could deconvolve them, apply an automatic threshold (like Otsu), then measure the volume and compare it to the ground-truth volume.
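
A sketch of that measurement with scikit-image, assuming `deconvolved` is the deconvolved bead stack and `voxel_volume` is the volume of one voxel in physical units:

```python
# Otsu-threshold the deconvolved stack, label connected components,
# and report each bead's volume for comparison against ground truth.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def bead_volumes(deconvolved, voxel_volume):
    mask = deconvolved > threshold_otsu(deconvolved)
    labels = label(mask)
    return [r.area * voxel_volume for r in regionprops(labels)]

# Ground truth for a bead of known diameter d:
# expected = 4 / 3 * np.pi * (d / 2) ** 3
```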