Deconvolution Library

imagej
deconvolution

#1

Recently, I have been working on an image deconvolution library built on top of TensorFlow, as an easy way to get full GPU acceleration of deconvolution algorithms.

The project is currently being maintained here:

My intention with the project was to be able to perform efficient deconvolution and dynamic PSF generation as part of large, batch-oriented workflows. What I wanted to check, though, is whether it would make sense to incorporate something like this as an ImageJ plugin, or in some other form friendlier to those who don’t like scripting/programming.
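For anyone curious, the current scripting API looks roughly like this (simplified from the README; the volume and PSF here are just random stand-ins):

```python
import numpy as np
from flowdec import data as fd_data
from flowdec import restoration as fd_restoration

# Stand-in data; in practice these would be an acquired volume and a generated/measured PSF
img = np.random.poisson(100, size=(11, 512, 512)).astype(np.float32)
psf = np.ones((5, 7, 7), dtype=np.float32) / 245.0

# Build the TensorFlow graph once, then reuse it across many volumes
algo = fd_restoration.RichardsonLucyDeconvolver(img.ndim).initialize()
res = algo.run(fd_data.Acquisition(data=img, kernel=psf), niter=25).data
```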

Recently @hadim mentioned that @bnorthan was working on something very similar as part of imagej-ops (or at least imagej-ops-experiments), so perhaps this is unnecessary, but I thought it would be worth bringing up, especially since TensorFlow seems to be a solid way for those with no GPU-programming experience (like myself) to build GPU-accelerated applications.
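To make that concrete: the core of FFT-based Richardson-Lucy is only a few lines of TensorFlow, and the same code runs on CPU or GPU depending solely on which build of TensorFlow is installed. Here is a minimal sketch (my own illustration using the tf.signal API, not Flowdec’s actual internals; it assumes the PSF has been padded to the image shape and centered at the origin):

```python
import tensorflow as tf

def fft_conv(x, otf):
    # Circular convolution via pointwise multiplication in Fourier space
    return tf.math.real(tf.signal.ifft3d(tf.signal.fft3d(tf.cast(x, tf.complex64)) * otf))

def richardson_lucy(observed, psf, n_iter=25, eps=1e-6):
    observed = tf.convert_to_tensor(observed, tf.float32)
    otf = tf.signal.fft3d(tf.cast(psf, tf.complex64))
    otf_conj = tf.math.conj(otf)  # conjugate OTF = convolution with the mirrored PSF
    est = tf.identity(observed)   # initial estimate: the observed image itself
    for _ in range(n_iter):
        ratio = observed / (fft_conv(est, otf) + eps)
        est = est * fft_conv(ratio, otf_conj)  # multiplicative RL update
    return est
```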

My questions then are:

  1. Is this worth pursuing as part of ImageJ or related projects? And if so, how can I help?
  2. Has TensorFlow been a dependency of any other ImageJ plugins? I saw that it’s a managed dependency within the scijava parent artifact, so perhaps it wouldn’t be entirely new to certain builds, but I’m not sure to what extent anything else has used it. Either way, it would hopefully constitute a pretty negligible increase in build complexity, and if anyone already has experience using it as part of ImageJ I’d be curious to hear how it went.

#2

Hi @eric-czech

This sounds really interesting; it would be great to have this approach added to imagej-ops. A potential starting point is to make ops-experiments multi-module (@ctrueden and @hadim already suggested this) and start an ops-tensorflow project. You could then work on this part relatively independently.

If the approach seemed promising, we could then discuss how to transition it to a more polished form, which would involve sorting out dependencies for every operating system, making sure it builds on each of them, and writing converters.


#3

Hi @eric-czech

I just got a chance over lunch to read over the readme on your project. It’s really detailed and has a lot of great information. A couple of quick points:

  1. Performance: What specific implementations did you use for your benchmarks? I’ve benchmarked several implementations of the Richardson-Lucy algorithm and got the following results:

ImageJ Ops (Java) - 200 seconds
DeconvolutionLab2 - 100 seconds
DeconvolutionLab2 with JCUFFT - 80 seconds
Pure C++ with MKL libraries - 10 seconds
CUDA (YacuDecu) - 2 seconds

More benchmarking is definitely needed. I think the Java implementations probably have bottlenecks that could be addressed (the ImageJ-ops RL was implemented 2-3 years ago and probably needs to be profiled and updated).
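For what it’s worth, aligned numbers are easier to get if everyone runs the same pinned volume, PSF, and iteration count. A minimal harness along these lines (using scikit-image’s CPU implementation just as an example baseline; the data is a random stand-in):

```python
import time
import numpy as np
from skimage import restoration

rng = np.random.default_rng(0)
vol = rng.poisson(100, size=(11, 1024, 1024)).astype(np.float32)  # pinned test volume
psf = np.ones((5, 7, 7), dtype=np.float32)
psf /= psf.sum()

t0 = time.perf_counter()
_ = restoration.richardson_lucy(vol, psf, num_iter=25, clip=False)  # `iterations=` in older scikit-image releases
print(f"scikit-image RL (CPU), 25 iterations: {time.perf_counter() - t0:.1f} s")
```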

  2. I didn’t implement blind deconvolution in ops, because there is not a lot of evidence that it converges for microscopy images. Note that there is no blind deconvolution in DeconvolutionLab2, Huygens, or most of the other commercial products, for similar reasons (read the section on blind deconvolution in the DL2 paper; it goes over the issues). Note that there is an open blind deconvolution algorithm in MATLAB (deconvblind) if you ever want to experiment with it.

#4

Hey @bnorthan,

re:Performance

It’s hard to get a perfectly aligned comparison of the two, but I put some of my earlier tests together into something a bit more cohesive in this DeconvolutionLab2 Comparison Notebook.

The punchline there, on a 1024x1024x11 volume:

  • Flowdec takes about 0.6-0.8 seconds
  • DeconvolutionLab2 w/ JCUFFT takes about 40 seconds

I’m also working with some partners of our lab who use a commercial implementation (I’m probably not supposed to say which one, so I’ll err on the side of being vague), but I’ve been told, and seen some evidence, that its running times are comparable to DeconvolutionLab2’s, which makes sense given that we at least know it does the same thing, just with FFT/iFFT on GPU.

Not sure where our numbers differ (GPU generation, volume size, iteration count maybe?), but it looks like we’ve got about the same ratios at least. I’d be happy to try it with whatever settings/data you were using, though, if you’d like.


re: Blind Deconvolution

As far as blind deconvolution goes, I haven’t worked with it myself either, but one of our partners uses it as part of their commercial solution for certain oil immersions, and they mentioned seeing some evidence that the blind results were better than using a theoretical PSF model with a poor fit. It’s good to know, though, that it’s not something people have come to expect (thank you for the context!).

It does seem like a cool problem space though: did you ever experiment with that MATLAB implementation for anything? I was imagining that if there is going to be a big breakthrough that makes it more mainstream, microscopy-specific neural nets might be a good bet.


#5

And re: ops-tensorflow, sounds good to me! I’d be happy to start experimenting with it given a framework to work under.


#6

Hi Eric

There is already an imagej-tensorflow project, which probably has some code relevant to you. I haven’t had a chance to play with it yet, but that’s the place to look to see what facilities are available for making TensorFlow plugins.

In addition, I added a skeleton for an ops-experiments-tensorflow project. Feel free to fork it, experiment with the TensorFlow deconvolution, and then make a pull request.

As I see it, to integrate your code into ImageJ2 as an op and command you’ll need to do the following:

  1. Set up the dependencies needed for TensorFlow in the pom.xml.
  2. If required, set up special build steps in the pom.xml as well.
  3. Add any required “glue” code to call the TensorFlow workflow (for the JavaCPP CUDA wrapper you need a special wrapper class).
  4. Add a utility class to convert ImgLib2 structures to whatever TensorFlow needs (see ConvertersUtility). NOTE: this may already be done in the imagej-tensorflow project. If not, and if we get to the point where we want a polished TensorFlow deconvolution implementation, we can make IJ2 TensorFlow converters.
  5. Make an op for your code; for example, here is the YacuDecu CUDA op.
  6. (optional) Make a command; it’s an easy way to add a menu item and a simple GUI harvested from the parameters. Commands can also be imported into KNIME as nodes.

Let me know if you have questions.

Brian


#7

If I had to guess, the “mystery” product you are talking about is perhaps AutoQuant? If so, I’m familiar with that product, because I used to work for AutoQuant/Media Cybernetics several years ago. I wouldn’t worry about mentioning the name of the product. The algorithms are all published, and there is even an attempt at an open-source implementation of their constrained blind algorithm (it doesn’t look like it’s maintained, but it’s there).

The issue with blind deconvolution is that, unconstrained, the image and PSF drift toward the trivial solution: the PSF becomes an impulse and the “deconvolved” image stays the original image. For example, see the following Dropbox folder with MATLAB code and test images (images from the DL2 site). (I think I did this right, but let me know if there are any errors or mistakes in my code.)
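In toy form the alternating scheme looks like the following (my own NumPy illustration, not the MATLAB code in that folder; the PSF is assumed padded to the image shape and centered at the origin). Run it for enough iterations and you can watch the PSF collapse toward an impulse while the “deconvolved” image heads back to the observed one:

```python
import numpy as np

def fft_conv(a, b):
    return np.real(np.fft.ifftn(np.fft.fftn(a) * np.fft.fftn(b)))

def fft_corr(a, b):
    # Correlation = convolution with the mirrored kernel
    return np.real(np.fft.ifftn(np.fft.fftn(a) * np.conj(np.fft.fftn(b))))

def blind_rl_step(img, psf, observed, eps=1e-6):
    # RL update of the image using the current PSF estimate
    img = img * fft_corr(observed / (fft_conv(img, psf) + eps), psf)
    # RL update of the PSF with the roles of image and PSF swapped
    psf = psf * fft_corr(observed / (fft_conv(img, psf) + eps), img)
    return img, psf / psf.sum()  # unit-sum normalization is the only PSF constraint
```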

Due to this issue, blind deconvolution is usually constrained. The constraints are actually quite specific (you need the full set of scope data: spacings, NA, RI, etc., to form the constraint and initial guess). Thus one obvious question is how “blind” the algorithm really is, given the tight constraints. The second question is whether the blind algorithm produces a better image. There is not a lot of hard data on this; however, one paper that examined AutoQuant and Huygens and measured the volume of deconvolved spheres showed that AutoQuant fixed-PSF deconvolution produced measurements equal to or slightly more accurate than blind deconvolution. In fact, Huygens fixed-PSF deconvolution produced even more accurate measurements (an average of 10 µm³), though the Huygens data is only in the discussion, not the table.

It’s interesting that your partner is seeing some evidence that the blind results are better. Do you know what fixed deconvolution technique they used for comparison? Can they share their data, or at least the original images? Blind deconvolution with an appropriate first guess and constraints will give a better image than a poorly fitted PSF model, but a fair comparison would look at blind deconvolution vs. deconvolution with an accurate theoretical PSF vs. deconvolution with a good measured PSF.

Also, to isolate the effect of the initial guess from the “blind” part, you would really need to show data on the complete co-convergence of image and PSF (that is, the image and PSF at every iteration).

As you mention, it’s possible there could be a breakthrough using neural networks or other techniques. It will be interesting to follow.

Brian


#8

Oh nice! So, following along with your example and the docs for deconvblind you mentioned earlier, would you say it’s fair to characterize deconvblind as very weakly constrained (by dampar and maybe readout too), whereas the AutoQuant algorithm is better constrained for microscopy data specifically? It seems obvious based on what you said, but I wanted to come away with the correct impression.

That’s great context, though, on how much worse the blind results look. Do you know if they improve to something more like the non-blind result with a greater number of iterations? I’d try it myself, but I don’t have a MATLAB license at the moment.

Regardless, I’m sold on the idea that fixed PSFs are likely better, especially if constraining an accurate blind deconvolution algorithm requires collecting the same operationally burdensome parameters (spacings, NA, RI, etc.) that come with fixed PSFs in a high-throughput use case. I pinged our partner about getting more info on their experiences in the blind deconvolution realm but haven’t heard anything yet. I’ll certainly share it if they’re willing to do that at some point!


#9

Hi Eric

Interesting conversation. One thing I should mention is that deconvolution results can be affected by many factors: first guess of the image, first guess of the PSF (including estimates of aberrations), regularization, algorithm acceleration, edge handling, etc. So the differences between the results from AutoQuant and other deconvolution algorithms could be because of the “blind” part, or could be because of something else entirely. It’s pretty complicated.

I think in the MATLAB code it is only the image that is constrained by the dampar and readout terms; the only constraint on the PSF is normalization. I could be wrong on that, though. I have not had a chance to step through the deconvblind code, and the documentation seems ambiguous.

But yes, the AutoQuant algorithm is better constrained, using at least a frequency-space constraint and a spatial hourglass constraint, and possibly others.
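Schematically, that kind of per-iteration PSF constraint looks like this (my own guess at the flavor of the operations, definitely not AutoQuant’s actual code; otf_support and hourglass_mask would be precomputed from the scope metadata: NA, RI, spacings):

```python
import numpy as np

def constrain_psf(psf, otf_support, hourglass_mask):
    # Frequency-space constraint: zero the OTF outside its theoretical support
    otf = np.fft.fftn(psf) * otf_support
    psf = np.real(np.fft.ifftn(otf))
    # Spatial constraint: restrict energy to the hourglass-shaped region
    psf = psf * hourglass_mask
    psf = np.clip(psf, 0.0, None)  # non-negativity
    return psf / psf.sum()         # renormalize to unit sum
```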

Unconstrained, the results actually get worse with a greater number of iterations. The best result “seems” to be somewhere around 20-30 iterations (I did 50). This is why I mentioned that “co-convergence” data is needed to properly analyze the blind algorithm. Since the PSF and image are both changing, they can affect each other. For example, say the PSF converges at iteration N. That means that until N, the image was being deconvolved with a non-converged PSF.

Thus it may make sense to continue deconvolution with a fixed PSF for M more iterations. Ideally the PSF would stop changing (converge) once it reached the true solution; unconstrained, this is not the case.

Thus people have looked into schemes where multiple PSF iterations are applied for every image iteration at the start of optimization (so the PSF converges faster at first and slower at the end):

https://ieeexplore.ieee.org/document/638806/
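The skeleton of such a schedule is simple even if tuning it is not (this is my reading of the general idea, not that paper’s exact algorithm; update_img and update_psf would be single RL updates like the ones in the toy code above):

```python
def scheduled_blind_rl(observed, img, psf, update_img, update_psf, n_outer=30):
    for k in range(n_outer):
        # Many PSF sub-iterations early on, tapering down to one
        for _ in range(max(1, 10 - k)):
            psf = update_psf(img, psf, observed)
        img = update_img(img, psf, observed)
    return img, psf
```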

Such schemes are pretty complicated, and somewhat ad hoc. To evaluate them you need to…

  1. Output the first guess of the PSF to evaluate the effect of the PSF model and aberration model, independent of blind deconvolution.

  2. Output the PSF and image at each iteration (or at least every few iterations) to evaluate co-convergence of the algorithm (see the sketch after this list).

  3. Show it outperforms an optimized theoretical fixed-PSF deconvolution, or has some other benefit (for example, show you can get a reasonable result without optics parameters and a theoretical first guess).

  4. Show it works on a wide variety of images, especially new images that the algorithm has never seen before.
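The instrumentation for points 1 and 2 is straightforward to bolt onto any of these loops; something like the following, where blind_step is whatever function performs one combined image/PSF update:

```python
import numpy as np

def run_and_record(observed, img0, psf0, blind_step, n_iter, out="iterates.npz"):
    snaps = {"img_000": img0, "psf_000": psf0}  # the first guesses (point 1)
    img, psf = img0, psf0
    for k in range(1, n_iter + 1):
        img, psf = blind_step(img, psf, observed)
        snaps[f"img_{k:03d}"] = img             # per-iteration snapshots (point 2)
        snaps[f"psf_{k:03d}"] = psf
    np.savez_compressed(out, **snaps)           # dump iterates for offline analysis
    return img, psf
```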

At least in AutoQuant, you need a complete set of metadata anyway, so you don’t bypass having to provide that. My own philosophy is that I don’t want to use an algorithm until I see proof of benefit, so for now I have not bothered to use blind deconvolution for any of my projects. If someone provided solid evidence of cases where blind deconvolution outperforms optimized theoretical/measured-PSF deconvolution, along with detailed information on the behavior of the algorithm, and showed it worked over a wide variety of images (especially new images “unseen” by the algorithm), then I’d consider using it.


#10

Thank you for the thoughtful responses @bnorthan – I’ll be on the lookout for some evidence of that (and now I’m extra curious about how our partner got it to work well).


#11

Hi @eric-czech

I just noticed this paper today by Jizhou Li (the author of the PSF generation algorithm that ops wraps): https://www.researchgate.net/publication/324744194_Accurate_3D_PSF_estimation_from_a_wide-field_microscopy_image. Have you seen it already? It seems like a promising approach to blind deconvolution, as it estimates parameters instead of a PSF image.

I have only had a chance to do a “fly-by” and haven’t absorbed the details. However, it appears he compared several methods of “blind” deconvolution. Note that his method estimates only a handful of parameters, as opposed to blind RL (AutoQuant is a constrained version of blind RL).
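Just to illustrate the distinction (a toy of my own, unrelated to Li’s actual method): a parametric approach might fit a single width parameter of an assumed PSF model to the data and then deconvolve with that fixed PSF, rather than updating a free-form PSF image every iteration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_psf(shape, sigma):
    # Isotropic Gaussian centered at the origin (wrap-around convention)
    grids = np.meshgrid(*[np.fft.fftfreq(n) * n for n in shape], indexing="ij")
    r2 = sum(g ** 2 for g in grids)
    psf = np.exp(-r2 / (2.0 * sigma ** 2))
    return psf / psf.sum()

def fit_sigma(observed, img_guess):
    # Pick the sigma whose blur of the current image guess best matches the data
    def loss(sigma):
        otf = np.fft.fftn(gaussian_psf(observed.shape, sigma))
        blurred = np.real(np.fft.ifftn(np.fft.fftn(img_guess) * otf))
        return float(np.mean((blurred - observed) ** 2))
    return minimize_scalar(loss, bounds=(0.5, 10.0), method="bounded").x
```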

Maybe there is a way for the three of us to coordinate some testing on a larger image collection. We could organize the testing in such a way that you can answer some of your other questions, such as optimal PSF size. I’ll reach out to Jizhou on GitHub… maybe he’ll join the forum. It seems like he is doing some really cool deconvolution work and has so far made it accessible from a number of environments (MATLAB, Python, ImageJ, Icy).