Deconvolution with a true PSF from beads

Hey deconvolution experts! (@bnorthan?)

I am planning to do some deconvolution on a spinning disk confocal dataset to see if it can improve the SNR, and I have a few questions related to this:

  1. We are acquiring 6 planes spaced 300 nm apart, and I was wondering what the z spacing for the beads (175 nm diameter) should be. Should it be the same as the images (300 nm)? Or could I decrease it to capture more information? In that case, how would the deconvolution algorithm know that the data and the PSF are not sampled at the same spacing?

  2. To what XY size should I crop the bead I choose as the PSF? Should I arbitrarily pick a size that contains the Airy disk?

  3. The exposure time and laser intensity we use are very low to avoid photobleaching. Could I increase them for the beads to get a better PSF? All I want in the end is the shape of the true PSF, so the more photons I get the better, right? Or should I stick to the parameters we use for the real acquisitions?

  4. Should I average multiple beads?

  5. The last one is about software: how does your RLTV implementation in Ops perform against the one from DeconvolutionLab, @bnorthan? Did you compare both implementations on the same dataset? Is the output the same? In terms of speed, which one is faster? Do they behave the same at the boundaries? If the results are similar, I’ll probably go for the Ops version, as it is much easier to script (and Ops are cool!).

(We are using the PS-Speck Microscope Point Source Kit from Invitrogen.)

Thanks.

1 Like

Hi @hadim, my main skill set is the implementation of these algorithms on a computer, so I highly encourage others, especially those with more experience on the practical side (i.e. actually at the scope), to add wisdom where they can. That being said:

We are acquiring 6 planes spaced 300 nm apart, and I was wondering what the z spacing for the beads (175 nm diameter) should be. Should it be the same as the images (300 nm)? Or could I decrease it to capture more information? In that case, how would the deconvolution algorithm know that the data and the PSF are not sampled at the same spacing?

The spacing of the PSF and the image should be the same. If you don’t acquire them at the same spacing, you will need to downsample before deconvolution. What is the theoretical lateral and axial resolution of your system?
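For illustration, here is a back-of-envelope Nyquist check; the NA, emission wavelength, and immersion index below are hypothetical placeholders, not values from this thread, so plug in your own:

```python
# Back-of-envelope Nyquist sampling check. NA, wavelength, and
# immersion index are illustrative placeholders; use your system's values.
NA = 1.4            # numerical aperture
wavelength = 0.52   # emission wavelength in micrometers
n = 1.515           # immersion medium refractive index

lateral_res = 0.61 * wavelength / NA       # Rayleigh criterion, ~0.23 um here
axial_res = 2.0 * wavelength * n / NA**2   # classic widefield estimate, ~0.80 um here

print("Nyquist xy spacing <= %.0f nm" % (lateral_res / 2 * 1000))
print("Nyquist z spacing  <= %.0f nm" % (axial_res / 2 * 1000))
```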

Also, acquire twice as many planes (twice the axial range) for the PSF. The “non-circulant” boundary handling approach can take advantage of this extra information. You can crop the PSF if you are using another approach.

This one is a bit tricky. You need to make sure you capture the last out-of-focus Airy disc. I think this is something we can discuss further later, if you are able to share the bead image.

More photons is better. The parameters that determine the shape of the PSF (wavelength, spacings, NA, etc.) should be the same as in the real experiment. The parameters that control noise can be different, so that you get a PSF with high SNR.

Yes, if you can align them properly. You will also need to subtract the background from the PSF and normalize it to unit energy. There is also an approach that Huygens uses called “PSF distilling”: basically it is a reverse deconvolution (you know the shape of the beads, so you can make a “fake” image and solve for the PSF instead). Maybe we can come up with some nice PSF preprocessing scripts.
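As a starting point, here is a minimal Jython sketch of that preprocessing (background subtraction followed by normalization to unit energy); the background value is something you would estimate from a signal-free region, and the parameter names are just examples:

```python
#@ OpService ops
#@ Img psf
#@ Float background
#@OUTPUT Img psfNorm

# Work in float so integer inputs don't overflow or truncate.
psfNorm = ops.convert().float32(psf)

# Subtract the background estimate, clamping negatives to zero.
for t in psfNorm:
    t.setReal(max(t.getRealFloat() - background, 0.0))

# Normalize to unit energy so deconvolution preserves total intensity.
total = ops.stats().sum(psfNorm).getRealDouble()
for t in psfNorm:
    t.setReal(t.getRealFloat() / total)
```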

Ops deconvolution needs continued testing. I have verified results against DeconvolutionLab for basic use cases. I believe my edge handling may be a bit different. I also implemented a “non-circulant” approach for edge handling, which needs to be tested further.

At this point I recommend we use both, and verify them against each other. There is supposed to be a new version of DeconvolutionLab out soon; the website still says “coming in October”, but I don’t see it there yet. If it is not released yet, maybe they will give us an advance copy.

The purpose of Ops deconvolution is to re-invent the wheel using slightly different tools (ImgLib2, Ops, ImageJ2), all BSD licensed. So we want to make sure it behaves the same as known implementations.

FYI, I am not sure about speed right now. My long-term plan, as of right now, is to implement some algorithms using ND4J, the sister project of DL4J. ND4J is a wrapper around different optimized native math libraries, and it is supposed to be very easy to switch the underlying engine to GPU. I’ve had success with the ND4J library on other projects.

One thing I have implemented is an “acceleration” scheme. Basically that means the algorithm takes a bigger step at each iteration, so it can finish in many fewer iterations. This still needs to be tested.
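For reference, one common scheme of this kind is the Biggs-Andrews vector acceleration; the NumPy sketch below shows the idea only (it is not the Ops code, and rl_update is a hypothetical function performing one plain Richardson-Lucy iteration):

```python
import numpy as np

def accelerated_rl(x0, rl_update, n_iter):
    # Biggs-Andrews style acceleration: extrapolate along the direction
    # of recent updates, then apply a standard RL step at that point.
    y = x0.astype(np.float64)
    x_old, g_old = None, None
    for _ in range(n_iter):
        x = rl_update(np.maximum(y, 0))      # one ordinary RL iteration
        g = x - y                            # direction moved this iteration
        if g_old is not None:
            # Step factor from the correlation of successive directions,
            # clamped to [0, 1] for stability.
            alpha = (g * g_old).sum() / ((g_old * g_old).sum() + 1e-12)
            alpha = min(max(alpha, 0.0), 1.0)
            y = x + alpha * (x - x_old)      # the "bigger step"
        else:
            y = x
        x_old, g_old = x, g
    return x
```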

Anyway, this is a great opportunity to further verify, develop, and test Ops deconvolution, and to gain experience with other deconvolution tools (especially DeconvolutionLab2). I’ll be happy to give you whatever help you need.

6 Likes

Can you please clarify whether the PSF needs to have the same stack dimensions as the sample data? I have averaged beads and launched the op via the suggested Python script. If I use the averaged PSF, I immediately get a litany of error messages. If I run the data against itself, I see a progress bar and then an output stack of all zeros. I’ve made the averaged PSF the same XYZ size, tried 16-bit and 32-bit, and tried normalizing to the max, but I can’t get it to run.
The error log is:

Started OPS deconv v0.py at Fri Dec 16 14:05:08 CST 2016
Traceback (most recent call last):
  File "C:\Program Files (x86)\ImageJ\macros\OPS deconv v0.py", line 6, in <module>
    deconvolved=ops.deconvolve().richardsonLucyTV(image, psf, 10, 1.0);
    at net.imagej.ops.thread.chunker.DefaultChunker.run(DefaultChunker.java:104)
    at org.scijava.command.CommandModule.run(CommandModule.java:205)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:940)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:156)
    at net.imagej.ops.map.MapUnaryComputers$IIToIIParallel.compute1(MapUnaryComputers.java:101)
    at net.imagej.ops.map.MapUnaryComputers$IIToIIParallel.compute1(MapUnaryComputers.java:87)
    at net.imagej.ops.copy.CopyRAI.compute1(CopyRAI.java:91)
    at net.imagej.ops.copy.CopyRAI.compute1(CopyRAI.java:55)
    at net.imagej.ops.deconvolve.RichardsonLucyC.compute2(RichardsonLucyC.java:169)
    at net.imagej.ops.deconvolve.RichardsonLucyC.compute2(RichardsonLucyC.java:75)
    at net.imagej.ops.filter.AbstractFFTFilterF.compute2(AbstractFFTFilterF.java:241)
    at net.imagej.ops.filter.AbstractFFTFilterF.compute2(AbstractFFTFilterF.java:68)
    at net.imagej.ops.special.function.BinaryFunctionOp.run(BinaryFunctionOp.java:78)
    at net.imagej.ops.special.function.AbstractBinaryFunctionOp.run(AbstractBinaryFunctionOp.java:62)
    at org.scijava.command.CommandModule.run(CommandModule.java:205)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:940)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:156)
    at net.imagej.ops.deconvolve.DeconvolveNamespace.richardsonLucyTV(DeconvolveNamespace.java:369)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)

java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ClassCastException: net.imglib2.type.numeric.integer.UnsignedShortType cannot be cast to net.imglib2.type.numeric.real.FloatType

    at org.python.core.Py.JavaError(Py.java:495)
    at org.python.core.Py.JavaError(Py.java:488)
    at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:188)
    at org.python.core.PyObject.__call__(PyObject.java:345)
    at org.python.core.PyMethod.__call__(PyMethod.java:170)
    at org.python.pycode._pyx4.f$0(C:\Program Files (x86)\ImageJ\macros\OPS deconv v0.py:6)
    at org.python.pycode._pyx4.call_function(C:\Program Files (x86)\ImageJ\macros\OPS deconv v0.py)
    at org.python.core.PyTableCode.call(PyTableCode.java:165)
    at org.python.core.PyCode.call(PyCode.java:18)
    at org.python.core.Py.runCode(Py.java:1275)
    at org.scijava.plugins.scripting.jython.JythonScriptEngine.eval(JythonScriptEngine.java:76)
    at org.scijava.script.ScriptModule.run(ScriptModule.java:177)
    at org.scijava.module.ModuleRunner.run(ModuleRunner.java:167)
    at org.scijava.module.ModuleRunner.call(ModuleRunner.java:126)
    at org.scijava.module.ModuleRunner.call(ModuleRunner.java:65)
    at org.scijava.thread.DefaultThreadService$2.call(DefaultThreadService.java:191)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ClassCastException: net.imglib2.type.numeric.integer.UnsignedShortType cannot be cast to net.imglib2.type.numeric.real.FloatType
    at net.imagej.ops.thread.chunker.DefaultChunker.run(DefaultChunker.java:104)
    at org.scijava.command.CommandModule.run(CommandModule.java:205)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:940)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:156)
    at net.imagej.ops.map.MapUnaryComputers$IIToIIParallel.compute1(MapUnaryComputers.java:101)
    at net.imagej.ops.map.MapUnaryComputers$IIToIIParallel.compute1(MapUnaryComputers.java:87)
    at net.imagej.ops.copy.CopyRAI.compute1(CopyRAI.java:91)
    at net.imagej.ops.copy.CopyRAI.compute1(CopyRAI.java:55)
    at net.imagej.ops.deconvolve.RichardsonLucyC.compute2(RichardsonLucyC.java:169)
    at net.imagej.ops.deconvolve.RichardsonLucyC.compute2(RichardsonLucyC.java:75)
    at net.imagej.ops.filter.AbstractFFTFilterF.compute2(AbstractFFTFilterF.java:241)
    at net.imagej.ops.filter.AbstractFFTFilterF.compute2(AbstractFFTFilterF.java:68)
    at net.imagej.ops.special.function.BinaryFunctionOp.run(BinaryFunctionOp.java:78)
    at net.imagej.ops.special.function.AbstractBinaryFunctionOp.run(AbstractBinaryFunctionOp.java:62)
    at org.scijava.command.CommandModule.run(CommandModule.java:205)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:940)
    at net.imagej.ops.OpEnvironment.run(OpEnvironment.java:156)
    at net.imagej.ops.deconvolve.DeconvolveNamespace.richardsonLucyTV(DeconvolveNamespace.java:369)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:186)
    ... 17 more
Caused by: java.util.concurrent.ExecutionException: java.lang.ClassCastException: net.imglib2.type.numeric.integer.UnsignedShortType cannot be cast to net.imglib2.type.numeric.real.FloatType
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at net.imagej.ops.thread.chunker.DefaultChunker.run(DefaultChunker.java:98)
    ... 39 more
Caused by: java.lang.ClassCastException: net.imglib2.type.numeric.integer.UnsignedShortType cannot be cast to net.imglib2.type.numeric.real.FloatType
    at net.imglib2.type.numeric.real.FloatType.set(FloatType.java:50)
    at net.imagej.ops.copy.CopyType.compute1(CopyType.java:56)
    at net.imagej.ops.copy.CopyType.compute1(CopyType.java:45)
    at net.imagej.ops.map.Maps.map(Maps.java:297)
    at net.imagej.ops.map.MapUnaryComputers$IIToIIParallel$1.execute(MapUnaryComputers.java:106)
    at net.imagej.ops.thread.chunker.DefaultChunker$1.run(DefaultChunker.java:78)
    at org.scijava.thread.DefaultThreadService$1.run(DefaultThreadService.java:174)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    ... 4 more

It doesn’t need the same dimensions; however, both should have the same number of dimensions and axis types.

Verify the PSF is 3D (if your image is 3D).
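Also, the ClassCastException at the bottom of the log (UnsignedShortType cannot be cast to FloatType) suggests a 16-bit input is reaching a float buffer inside the op. Converting both inputs to 32-bit float first may avoid it; here is a minimal Jython sketch (the parameter names and iteration settings are only examples):

```python
#@ OpService ops
#@ Img image
#@ Img psf
#@OUTPUT Img deconvolved

# Convert both inputs to FloatType before deconvolving, so no
# integer-to-float cast happens inside the op.
imageF = ops.convert().float32(image)
psfF = ops.convert().float32(psf)

# 10 iterations with a small TV regularization factor (example values).
deconvolved = ops.deconvolve().richardsonLucyTV(imageF, psfF, 10, 0.01)
```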

If you are able to share your images I can take a look.

1 Like

@hadim did you ever acquire bead images and biological images for your experiment? If so, are you able to share them? I’ve been looking into ways to extract the PSF from a bead image, and I also plan to test out DeconvolutionLab2 soon and verify the Ops deconvolution implementations against it. It would be great to have an example of best practices for extracting a PSF from beads. A similar question just came up on the other listserv.

I’ll try to find some bead images this week.

1 Like

@bnorthan I didn’t push very far on the deconvolution, so I can’t find the beads + images. Moreover, we recently updated our two microscopes, so they probably wouldn’t be relevant anymore. With the new microscopes, we are planning to set up a deconvolution pipeline, since we do not have a confocal system.

My plan is to test and compare the different available deconvolution algorithms and software packages, among them DeconvolutionLab2 and Microvolution.

I’ll post sample images and beads here as soon as I get some.

1 Like

Hi @bnorthan, hi @hadim,

I’m coming back to this topic as I would like to find out how deconvolution can be done using Ops.

Did you write any nice PSF preprocessing scripts in the meantime? Or do you know of a script/plugin that allows me to average beads to get a PSF image?

Thanks in advance!
Robert

Hi @haesleinhuepf

I just added a couple of new deconvolution scripts.

ExtractPSF is based on the Huygens PSF distilling process. The link points to a Leica page describing PSF extraction for STED because, sadly, the Huygens wiki now seems to be password protected (it used to be public). It had a lot of good information on it; I guess this is more motivation to keep developing the ImageJ deconvolution-related wiki pages.

PreprocessPSFDeconvolve takes an image and a PSF, crops the PSF to a smaller size, and subtracts the background. The crop size and background level should be set such that the resulting PSF contains only “PSF” signal and not noise.
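For readers who just want the gist of the cropping step, here is a rough Jython sketch (the half-width and parameter names are hypothetical examples; the actual PreprocessPSFDeconvolve script linked above is the reference):

```python
#@ OpService ops
#@ Img psf
#@OUTPUT Img cropped

from net.imglib2 import FinalInterval
from net.imglib2.view import Views

half = 16  # example half-width in pixels; pick it so only PSF signal remains
center = [psf.dimension(d) // 2 for d in range(psf.numDimensions())]
interval = FinalInterval([c - half for c in center],
                         [c + half for c in center])

# Centered crop; zeroMin shifts the view back to a (0,0,...) origin,
# and the copy materializes the view as a new image.
cropped = ops.copy().rai(Views.zeroMin(Views.interval(psf, interval)))
```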

Let me know if you are able to run these scripts, and if not, let me know what problems you encounter. If you are able to share the bead images and related biological samples, I would be happy to run them myself.

Note: I just fixed a couple of bugs that affected the non-circulant/non-periodic mode. I’ll check those in today. In the meantime, run with non-circulant=false.

4 Likes

Hi @bnorthan,

awesome! Thanks for the links! I’ll try the stuff out. Furthermore, I should have access to the Huygens forum, as we have a Huygens license as well.

Thanks again!
Robert

2 Likes

Sounds great. At this point I am hopeful that Ops deconvolution produces results close to Huygens deconvolution, assuming the same PSF. If it does not, there is probably something wrong in the Ops code. Most commercial deconvolution is based on Richardson-Lucy, with enhancements for speed and noise handling, and Ops deconvolution should be close to the same algorithm.
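For context, the core Richardson-Lucy update that these implementations share looks like this (a NumPy/SciPy sketch of the textbook iteration for a 3D stack, not the actual Ops or Huygens code):

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_step(estimate, image, psf):
    # One textbook Richardson-Lucy iteration:
    #   estimate <- estimate * ( flip(psf) conv ( image / (psf conv estimate) ) )
    blurred = fftconvolve(estimate, psf, mode="same")
    ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
    psf_flipped = psf[::-1, ::-1, ::-1]          # adjoint of the blur operator
    return estimate * fftconvolve(ratio, psf_flipped, mode="same")
```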

If you want to try a theoretical PSF I recommend PSF Generator.

At this point Ops deconvolution is somewhat slow. I am in the process of benchmarking different FFT libraries (deconvolution speed is proportional to FFT speed) and will hopefully have a faster version in the coming months.

2 Likes

Over the last couple of days I ran Ops deconvolution on a public test image of a C. elegans embryo. Results are here. It would be great if people testing deconvolution tried deconvolving this image with other tools and posted the results in the forum. The best way to verify an algorithm is to compare its results to other implementations.

The processing was a bit slow to my mind. So I need to profile the Ops code, eliminate any accidental bottlenecks, benchmark various FFT libraries, and then create a fast native version if required. Native CPU and GPU should be easy to implement together, as tools like the JavaCPP presets have wrappers for FFTW, MKL, and CUDA, so multiple native implementations could potentially be done efficiently.

Another problem was spherical aberration. That can be handled by having better tools to generate the PSF. It’s on the roadmap.

Hi Guys,

I would like to jump in as well. I will be acquiring some bead images soon, probably next week, so I may share some. I have one question regarding the (XY)Z spacing, though. I had been thinking that PSF images should have the same resolution as the image to be deconvolved (as @bnorthan pointed out). However, the people at our LM facility suggested that I should acquire bead images at as high a resolution as possible, to get a better PSF estimate. Wouldn’t it make sense to acquire high-resolution bead images, average them, and only downsample them if needed? How many bead images should one acquire? I have been thinking 10-ish from different places in the FOV, but I have also heard numbers as high as 30-40…

Cheers,

Radek

I am on the programming and image processing side of things; as far as I know, images just appear magically over FTP (or Dropbox these days). So I’m not the guy to ask about the best way to do experiments on the microscope. Hopefully others will chime in.

That being said, one reason it may be a good idea to acquire beads at high resolution is that you could potentially re-use them for different acquisitions with equal or lower XYZ resolution (by downsampling).
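As a toy Jython illustration of that idea (the subsampling factors are hypothetical; note that naive subsampling just skips samples, so a careful pipeline might bin or low-pass filter first to avoid aliasing):

```python
#@ OpService ops
#@ Img beads
#@OUTPUT Img downsampled

from net.imglib2.view import Views

# Keep XY; take every 2nd z-plane (e.g. 150 nm steps -> 300 nm steps).
view = Views.subsample(beads, 1, 1, 2)
downsampled = ops.copy().rai(Views.zeroMin(view))
```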

1 Like

From the microscopy side: a guide on how to prepare and acquire bead PSFs can be found, e.g., on the website of PSFj:
http://www.knoplab.de/psfj/online-manual/
In our facility we use gold beads in reflection mode. The advantage is that you can use sub-resolution-sized beads that still give a good signal.

3 Likes