I’m trying to replicate the 3D PSF of my optical system, measured with beads, with a theoretical (i.e. model) PSF. What I’m looking for right now is a way to introduce a slight degree of astigmatism into a PSF stack. Is anyone aware of a transformation that can approximate the desired effect: distortion along one axis above the PSF center and along another axis below it? Thanks
Good day Lorenzo,
apart from the fact that reasonable PSF determination is a complicated issue, at least when done experimentally, your question can in principle be answered.
Astigmatism, i.e. a lens system with a cylindrical element, can in the simplest case be simulated by convolving the circularly symmetric PSF with a line element of suitable length.
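For what it’s worth, that idea can be sketched in a few lines of NumPy/SciPy (this is only an illustration, not part of anyone’s actual pipeline; the Gaussian stand-in PSF, kernel size, and line length are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(length, angle_deg, size=15):
    """Normalized binary line element of given length (pixels) and orientation."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # rasterize the line by sampling points along it
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        r = int(round(c + t * np.sin(theta)))
        q = int(round(c + t * np.cos(theta)))
        if 0 <= r < size and 0 <= q < size:
            k[r, q] = 1.0
    return k / k.sum()

# circularly symmetric stand-in PSF slice (a Gaussian, for illustration only)
y, x = np.mgrid[-16:17, -16:17]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

# convolve with a horizontal line element -> elongated ("astigmatic") slice
astig = convolve(psf, line_kernel(length=6, angle_deg=0))
```

The result is the original slice smeared along the line’s direction, which is the first-approximation cylindrical-element effect described above.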
Hi Herbie, and thanks for your reply. Yes, it has been rather complicated, and despite succeeding in measuring my PSF with beads I’m still struggling to get decent deconvolution results. Going back to my question, I didn’t think of the approach you are suggesting. I suppose I would have to vary the line direction above and below the PSF center to obtain what I see in my measured PSF?
all you really need to know here is optics, but that is a branch of physics and I fear you don’t have time to complete such a study …
In short, you need to know what an ideal cylindrical lens does around its focus.
An ideal cylindrical lens (without diffraction) produces a focal line instead of a focal point.
The fact that an ideal spherical lens shows no focus point but an extended PSF is due to diffraction.
The rest is up to you, but keep in mind that a theoretical PSF generally does not include lens aberrations etc., and that the proposed simulated introduction of the cylindrical element is only a first approximation.
I’m aware of the basics of astigmatism. What I’m looking for is not a full-fledged optical model but a first-order empirical approximation of what I see in my PSF. An example of the effect is here https://aemstatic-ww2.azureedge.net/content/dam/bow/print-articles/2013/issue-4/1307BOWjaisaitisF1.jpg
Thanks for your input.
[…] a first order empirical approximation of what I see in my PSF.
That’s what I’ve tried to provide:
[…] the introduction of the cylindrical element is only a first approximation.
You did and again thank you for that. I had an additional question:
I suppose I would have to vary the line direction above and below the PSF center to obtain what I see in my measured PSF?
Yes, this is an effect of the combined optics, spherical and cylindrical, and that’s what I tried to touch on in my earlier post about optics in general.
Of course, for the combined PSF, the line length depends on z.
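A crude way to sketch this z-dependence in code — here using a per-slice anisotropic Gaussian blur whose long axis flips at the focal plane, rather than an explicit line convolution; this is a toy stand-in, not a physical model, and the function name and parameters are made up for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_astigmatism(psf3d, z0, strength=0.3):
    """Blur each z-slice anisotropically; the elongation axis flips at focus z0.
    A first-order empirical stand-in for a weak cylindrical element."""
    out = np.empty(psf3d.shape, dtype=float)
    for z in range(psf3d.shape[0]):
        s = strength * abs(z - z0)                # elongation grows away from focus
        sigma = (s, 0.0) if z < z0 else (0.0, s)  # y-axis below focus, x-axis above
        out[z] = gaussian_filter(psf3d[z].astype(float), sigma=sigma)
    return out / out.sum()

# demo on a synthetic, rotationally symmetric 3D Gaussian "PSF" stack
zz, yy, xx = np.mgrid[-8:9, -8:9, -8:9]
stack = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2) - zz**2 / (2 * 3.0**2))
astig_stack = add_astigmatism(stack / stack.sum(), z0=8, strength=0.4)
```

Slices below the focal plane end up elongated along one axis and slices above it along the perpendicular axis, which is the qualitative behavior asked about.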
I wrote a script to extract the PSF from an image of several sub resolution beads. The script is loosely based on the process that is used in Huygens Deconvolution to extract the PSF. They call it “PSF Distilling”. It’s produced a good PSF for several of my use cases. You might give it a try and see if it works on your bead image.
Hi bnorthan and thanks for the script.
At this stage I’ve already completed the process of distilling a PSF from bead images with the latest beta version of the GDSC SMLM “psf creator” plugin by Alex Herbert. The whole procedure worked well, but the final distilled PSF is not of high quality due to the low S/N of my bead image stacks: I averaged data from more than 100 beads with a lot of improvement, but still not enough.
I thus attempted to circumvent the problem by fitting a theoretical PSF to my measured PSF by adjusting optical parameters iteratively. I’ve already obtained good deconvolution results with this approach, but I wanted to squeeze the last bit of improvement by incorporating a slight amount of astigmatism present in my distilled PSF.
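For readers curious what such an iterative fit looks like in the simplest case, here is a toy one-parameter least-squares sketch (the Gaussian model, parameter bounds, and synthetic “measured” PSF are all illustrative; a real fit would adjust NA, wavelength, refractive indices, etc. against the actual distilled PSF):

```python
import numpy as np
from scipy.optimize import minimize_scalar

y, x = np.mgrid[-16:17, -16:17]

# stand-in for the measured PSF: a Gaussian of (pretend-unknown) width 2.5
measured = np.exp(-(x**2 + y**2) / (2 * 2.5**2))
measured /= measured.sum()

def model(sigma):
    """One-parameter model PSF; only the width is fitted in this toy example."""
    m = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return m / m.sum()

# minimize the squared difference between model and measured PSF
fit = minimize_scalar(lambda s: np.sum((model(s) - measured) ** 2),
                      bounds=(0.5, 10.0), method='bounded')
```

With several optical parameters the same idea generalizes to a multivariate optimizer, but the principle — minimize the model/measurement residual — is the same.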
Are you able to share your bead and PSF images? I’m surprised that averaging 100 beads does not give a usable PSF. Perhaps there is an offset or gain that is not being compensated for?
Have you tried deconvolving your bead image with the extracted PSF? That is a good sanity check, as the beads in the deconvolved image should be much more point-like and symmetric.
Also, how big is your output PSF? A good rule of thumb is that your PSF should be centered and large enough that it has decayed to almost zero by the edges. The PSF used for deconvolution will be extended so that it is the same size as the image when performing FFTs in the deconvolution algorithm. If your PSF still has signal near the edges, you can end up with edge artifacts in the deconvolved image.
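That rule of thumb is easy to check numerically; a minimal sketch (the border width and thresholds here are arbitrary illustrative choices):

```python
import numpy as np

def edge_energy_fraction(psf, border=2):
    """Fraction of total PSF energy in the outermost `border` voxels on each axis."""
    core = psf[tuple(slice(border, -border) for _ in psf.shape)]
    return float(1.0 - core.sum() / psf.sum())

# a well-centered, decayed PSF has almost no energy at the edges...
y, x = np.mgrid[-16:17, -16:17]
good = np.exp(-(x**2 + y**2) / (2 * 2.0**2))

# ...whereas a PSF with residual background or clipped tails does
bad = good + 0.01 * good.max()
```

If the fraction is not close to zero, the PSF is either too small, off-center, or still carries background that should be subtracted.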
Sure I can, here you’ll find links to beads in agarose stack 1.tif, which is one of several input files that I used. The final measured psf v1 (it’s the average of 100+ bead images). And finally the fitted model psf v1.
From the distilled PSF I did remove the background, and it works reasonably well when deconvolving bead images. However, when I use it on real data with very bright fluorescent spots I get an unacceptable amount of ringing artifacts, which I attribute to residual noise at the edges.
Acquisition parameters: widefield, 4× magnification, NA 0.1 air objective (Leica). Beads are 1 µm latex in agarose covered with water. For deconvolution I am using DeconvolutionLab2, RL, 100 iterations.
Thanks a lot for the links
I noticed a number of possible reasons the PSF you generated may not have worked. The first thing I looked at was the background level of the measured PSF. There seemed to be a small (2e-4 or so) background level; while it may seem insignificant, it could affect the result.
Thus I subtracted 2e-4 from the PSF and set any negative values to zero. This is a fairly crude way of getting rid of the background, but at least in my experiment it helped get better deconvolution in the axial direction for the beads. Below are axial views of the results I got with “measured psf v1.tif” and “measured psf bgs v1.tif”.
I’d be curious whether the background subtracted PSF improves the result with the real image (it is entirely possible there is a different problem with the sample, perhaps sample induced aberrations).
All scripts I used and results I obtained are here - https://www.dropbox.com/home/Deconvolution_Test_Set/From%20Lorenzo%20IJ%20Listserv
Note that I used ‘ops scripts’ to produce the results. However, the results should be similar to the DeconvolutionLab2 results; in fact DeconvolutionLab2 is much easier to use and more mature (ops is still in beta).
The background subtraction script is as follows
# @OpService ops
# @ImgPlus beads
# @OUTPUT ImgPlus backgroundSubtracted

# subtract a constant background from the psf, clamping negative values at zero
for t in beads:
    val = t.getRealFloat() - 2e-4
    if val < 0:
        val = 0
    t.setReal(val)

backgroundSubtracted = beads
And the deconvolution script
# @OpService ops
# @Dataset data
# @Dataset psf
# @Boolean(value=True) nonCirculant
# @Boolean(value=True) acceleration
# @OUTPUT ImgPlus deconvolved_

from net.imglib2.util import Intervals
from net.imagej.axis import Axes
from net.imglib2.type.numeric.real import FloatType
from java.lang.Math import floor

psf_ = ops.convert().float32(psf)

# normalize psf so it sums to one
sumpsf = ops.stats().sum(psf_)
sumpsf = FloatType(sumpsf.getRealFloat())
print sumpsf
psf_ = ops.math().divide(psf_, sumpsf)

# convert image to 32 bit
img_ = ops.convert().float32(data.getImgPlus())

# now deconvolve
deconvolved_ = ops.deconvolve().richardsonLucy(img_, psf_, None, None, None, None, None, 30, nonCirculant, acceleration)
I really appreciate the help!
I didn’t want to cut too much background to avoid affecting weak Airy rings away from the PSF z-center. The value 1e-4 was a sort of compromise… However, I’ll definitely test your variant on real data as soon as I get back to my analysis Mac at the office later in the week! (a hopeless task on my MacBook while travelling)
I’ll get back to you.
Hi again, I finally tried your suggestion by removing a constant background of 2e-4 from my measured psf and deconvolving a real dataset. The results are greatly improved as you had anticipated and ringing is gone!
Surprisingly, the final results are very similar to those obtained with the model PSF, to the point that I’m uncertain about which one to use in my routine. The model PSF does not incorporate any astigmatism, but the measured PSF degrades very quickly in S/N above and below the PSF z-center. However, these differences do not seem to matter very much in terms of final deconvolution results. Do you have any experience on this aspect?
In my experience deconvolution results done with a good measured PSF are “better” than deconvolution results obtained with a good theoretical PSF, however the differences are not dramatic.
(As a side note, as far as I am aware there is surprisingly little work examining the performance of deconvolution using different types of PSFs and different preprocessing steps for the PSF.)
In your case it isn’t black and white, because both PSFs are sub-optimal: the measured one due to its SNR, and the theoretical one due to the difficulty of modelling aberrations.
If the point is to generate images that a human will interpret, then I would simply choose the image that looks subjectively better.
If deconvolution is part of an image-processing chain and you have a way to evaluate the performance of the overall protocol, then I’d try both PSFs in the chain and use the one that results in better performance.
Thanks Brian, I agree. I’ll stick with what looks best, and my impression is that the model PSF has a slight edge over the measured PSF in terms of contrast. That, however, could just be due to differences in sensitivity to the number of iterations in the RL algorithm: the last test I want to do is to vary this number and see what the optimal value is for each of the two PSFs…