How can I use the yacudecu deconvolution code in FIJI?

Hi all,
I am asking someone [esp bnorthan] to explain how to use the GPU-deconvolution code described in:

How would I go about running this on my win10/64 system [nVidia CUDA installed]? I’ve gotten “invalid directory” messages trying to compile the .java code (using Fiji) that I found:

What needs to be done to make things work? If someone can walk me through it it would be greatly appreciated.

Hi @Vytas

This project is meant to be built using Maven. As far as I know, Fiji can build simple Java files, though I’ve never used Fiji that way myself.

Basically you need to download the entire code base and use Maven to build. Often this is as simple as typing mvn in the command line. However, this project is a bit more complicated because you need to build the CUDA part with a C and CUDA compiler. @hadim helped polish the process on Linux, but it is still a multi-step process to build this project on Windows.

The easiest thing is probably for me to build the project for Windows and make the artifacts publicly available on either dropbox or an update site. I’m happy to do this as it would be very helpful for me to have someone test this on Windows.

Quick question: do you know which version of CUDA your target machine(s) have? I am currently building against CUDA 9.0, but can easily target a different version.


This sounds great! It would be a HUGE boost for deconvolution work.

We’re running CUDA 9.1 on most.

best regards

Hi @Vytas

I’ve put together a release of my Yacu Decu wrapper for Windows 10, CUDA 9.1. It’s a bit hackish and will involve grabbing files from Dropbox. I haven’t made a polished update site yet because I haven’t really decided what to do about distributing all the different versions for different operating systems and versions of CUDA. I took a quick look at what other plugins that use CUDA are doing, and don’t really see a simple solution. For example, at least according to the current documentation, CARE only has a Linux installation, and has separate versions for CUDA 8 and CUDA 9.

Anyway here are the instructions. Let me know right away if you have any issues or problems.

  1. The files you need are in dropbox here. It includes all the jars you need and a test image (a cropped version of the “bars” image).

  2. You may want to download and experiment with an “extra” copy of Fiji so you don’t mess up your original copy.

  3. You need to copy javacpp-1.3.2.jar to your Fiji jars directory and delete the older version of javacpp. Also copy ops-experiments-common-0.1.0-SNAPSHOT.jar and ops-experiments-cuda-0.1.0-SNAPSHOT.jar to the same jars directory.

  4. Now at the bottom of your Plugins menu you should have a new ‘OpsExperiments’ entry, which contains ‘YacuDecu Deconvolution’ and ‘YacuDecu theoretical PSF’. The first takes an image and a PSF as input; the second gives you the option of selecting an image and entering parameters for a widefield PSF.


If anything goes wrong let me know. If you can provide sample images I’ll be happy to debug with them on my end.

Dear Brian,

The code works extremely well and fast!

  I do see that the output comes back as a stack of channels rather than z slices (easily rearranged). I also see that trying to run it on a hyperstack throws an error:

java.lang.ArrayIndexOutOfBoundsException: 3
      at net.imglib2.img.planar.PlanarRandomAccess.setPosition(
      at net.imglib2.AbstractInterval.min(
      at net.imglib2.util.Util.getTypeFromInterval(
      at net.imagej.ops.copy.CopyRAI.initialize(
      at net.imagej.ops.DefaultOpMatchingService.singleMatch(
      at net.imagej.ops.DefaultOpMatchingService.findMatch(
      at net.imagej.ops.DefaultOpMatchingService.findMatch(
      at net.imagej.ops.OpEnvironment.module(
      at net.imagej.ops.copy.CopyNamespace.rai(
      at net.imagej.ops.experiments.filter.deconvolve.UnaryComputerYacuDecuNC.compute(
      at net.imagej.ops.experiments.filter.deconvolve.UnaryComputerYacuDecuNC.compute(
      at org.scijava.thread.DefaultThreadService$
      at java.util.concurrent.ThreadPoolExecutor.runWorker(
      at java.util.concurrent.ThreadPoolExecutor$

  so it isn’t expecting the extra dimensions (also easily dealt with, since your code seems scriptable in macros).

  I can't wait to try this on data that took over 30hrs to deconvolve using deconvlab2!

I can’t thank you enough.


Sounds great.

The issues with the data coming back as a stack of channels, and with more than three dimensions not being handled, could be addressed by a more sophisticated ImageJ2 script that handles multi-channel data properly and also sets the axes of the output data correctly.
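The per-channel idea can be sketched with plain Java arrays (purely illustrative: the real script would use imglib2 views such as `Views.hyperSlice` to slice the channel axis without copying):

```java
import java.util.function.UnaryOperator;

// Sketch of multi-channel handling: split a CZYX hyperstack into per-channel
// ZYX volumes, run a 3-D operation (deconvolution, in the real case) on each,
// and reassemble so the channel axis stays where it was.
public class PerChannel {

    // data[c][z][y][x]; op acts on one 3-D channel volume at a time.
    static float[][][][] applyPerChannel(float[][][][] data,
                                         UnaryOperator<float[][][]> op) {
        float[][][][] out = new float[data.length][][][];
        for (int c = 0; c < data.length; c++) {
            out[c] = op.apply(data[c]);
        }
        return out;
    }

    public static void main(String[] args) {
        float[][][][] img = new float[2][3][4][5]; // 2 channels, 3 z-slices
        img[1][0][0][0] = 7f;
        // identity op stands in for the deconvolution here
        float[][][][] res = applyPerChannel(img, v -> v);
        System.out.println(res[1][0][0][0]); // 7.0
    }
}
```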

Here is an example groovy script that uses the YacuDecuOp and the diffraction kernel op to deconvolve an image. It could be made multi-channel by using the imglib2 functions to interpret the axes properly. If you have questions about that let me know.

Edit: Nov 13th - In the groovy script below the PSF is assigned an explicit size, the PSF energy is now normalized to 1, and ‘borderSize’ is no longer used. This means the image will be extended based on the PSF size, which is needed to avoid edge artifacts.

// @OpService ops
// @UIService ui
// @ImgPlus img
// @LogService log
// @Integer numIterations(value=100)
// @Float numericalAperture(value=1.4)
// @Float wavelength(value=700)
// @Float riImmersion(value=1.5)
// @Float riSample(value=1.4)
// @Float xySpacing(value=62.9)
// @Float zSpacing(value=160)
// @OUTPUT ImgPlus psf
// @OUTPUT ImgPlus deconvolved

import net.imglib2.FinalDimensions
import net.imglib2.type.numeric.real.FloatType;
import net.imagej.ops.experiments.filter.deconvolve.YacuDecuRichardsonLucyOp;
import net.imagej.ops.experiments.filter.deconvolve.UnaryComputerYacuDecuNC;
import net.imglib2.RandomAccessibleInterval;

// convert to float (TODO: make sure deconvolution op works on other types)
imgF = ops.convert().float32(img);
// psf size
psfSize=new FinalDimensions(32, 32, 100);


riImmersion = 1.5;
riSample = 1.4;
xySpacing = 62.9E-9;
zSpacing = 160E-9;
depth = 0;

// create theoretical PSF
psf = ops.create().kernelDiffraction(psfSize, numericalAperture, wavelength,
				riSample, riImmersion, xySpacing, zSpacing, depth, new FloatType());

// normalize PSF energy to 1
float sumPSF = ops.stats().sum(psf).getRealFloat();
FloatType val = new FloatType();
val.set(sumPSF);
psf = ops.math().divide(psf, val);

startTime = System.currentTimeMillis();

powerOfTwo = false;
deconvolved = ops.run(YacuDecuRichardsonLucyOp.class, imgF, psf, null, null, null, null, powerOfTwo, numIterations, true);

endTime = System.currentTimeMillis();

print "Total execution time (Cuda) is: " + (endTime - startTime);
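As a standalone illustration of the “normalize PSF energy to 1” step, here is the same computation with plain Java arrays (no ImageJ dependency): dividing every voxel by the kernel’s total sum keeps the deconvolution from rescaling intensities.

```java
// Normalize a kernel so its elements sum to 1, mirroring the PSF
// normalization in the groovy script above (flat array stands in for
// the 3-D imglib2 Img).
public class PsfNormalize {

    // Divide each element by the sum of all elements; returns a new array.
    static float[] normalize(float[] psf) {
        float sum = 0f;
        for (float v : psf) sum += v;
        float[] out = new float[psf.length];
        for (int i = 0; i < psf.length; i++) out[i] = psf[i] / sum;
        return out;
    }

    static float sum(float[] a) {
        float s = 0f;
        for (float v : a) s += v;
        return s;
    }

    public static void main(String[] args) {
        float[] psf = {1f, 2f, 3f, 2f, 1f, 1f};
        float[] normalized = normalize(psf);
        System.out.println(sum(normalized)); // prints the total energy, close to 1.0
    }
}
```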

Dear Brian,

  I like the interactive scripting, but wanted automated operation on a series of stacks, so I hard-coded the input values and began running the groovy script inside a plain macro via run(). Works great! I can do 100 iterations in <4 sec, when I didn’t really try more than 10-15 using the ops RLTV approach, since that would take >>10 min using the CPU. I recorded the running of the groovy script via the text editor, and am not at all clear what the odd number after the @ is/means (it does work). I thought it might be the imageID, but it is not. I’ve not come across that before and it doesn’t turn up in forum searches or googling.

  run("yacudecu_tpsf_hardcoded", "img=net.imagej.ImgPlus@36095cdb");
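As an aside, the odd number after the @ is not an image ID: `net.imagej.ImgPlus@36095cdb` is Java’s default `Object.toString()`, i.e. the class name plus the object’s hash code in hex, which the recorder appears to fall back to for the `ImgPlus` input. A minimal illustration:

```java
// Java's default Object.toString() is:
//   getClass().getName() + "@" + Integer.toHexString(hashCode())
// which is exactly the shape of "net.imagej.ImgPlus@36095cdb": it is a
// textual rendering of an object reference, not a stable image ID.
public class DefaultToString {

    // True when toString() has the default "ClassName@hexHash" shape.
    static boolean looksLikeDefault(Object o) {
        String expected = o.getClass().getName() + "@"
                + Integer.toHexString(o.hashCode());
        return o.toString().equals(expected);
    }

    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(o); // e.g. java.lang.Object@1b6d3586 (hash varies per run)
    }
}
```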

  How would one pass arguments and perhaps a measured-PSF ID/name to the ops? This is all new to me.

  Our data typically has many hundreds of stacks to process, and it was taking over 30hrs to process using deconvlab2 macros. We can now do it better in about 2.

Hi @imagejan

@Vytas and myself are interested in getting the YacuDecu deconvolution working on a series of images. I briefly looked over your work on the SciJava Batch Processor and it looks like this is what we need.

First question: what is the best way to learn about the batch processor? It looks like the documentation is a work in progress (totally understandable). That being said, I think I am missing something on how to get started. Do I need to add an update site? Or is the functionality shipped with the latest Fiji? I was looking for either a menu item under Plugins or an extra button in the script editor, but can’t quite find it.

Thanks for your help

Hi @bnorthan and @Vytas,

great to see some interest in the batch processor!

Currently, the best way is by asking here on the forum, I’m afraid… :worried: See below for some linked forum posts that might contain some useful hints for you.

No update site needed. The necessary jar file (batch-processor.jar) is shipped via the Java-8 update site.

In summary, batch processing is enabled on every module (i.e. script or plugin) that has a “batchable” input. Currently, only File parameters are batchable, but with the next update and deployment of imagej-plugins-batch, batch processing will also be possible for image inputs such as Img or Dataset.

  • For a compatible module, the search bar will display a Batch button in the search result.
  • The script editor’s Batch button will run any script containing a batchable input parameter.
  • Lastly, there are the menu entries Process > Batch > Run Script from File and Process > Batch > Run Script from Menu, but these are likely to change in the future, since extensibility was added after these first menu commands were created.

The following posts might contain some useful information (sorry that it’s still scattered in forum posts, and not yet well documented on the wiki):


Thanks a lot for the information. I’m making another version of the YacuDecu command which takes File inputs instead of Img, and I’ll spend time this week trying to get it to integrate with your batch processor.


Hi @imagejan

Thanks for the links. I was able to follow them and get things “functional”. I search for my command, it finds it, then gives me the option of running as a batch. However I am having an issue when running the GPU deconvolution on multiple images through the batch option.

Are the batch operations run in parallel? If so, is there a way to force them to run serially? I am wondering if multiple threads are hitting the GPU at once and causing an error. If I run a “batch” job of 1, things work fine. If I run more than one, the images immediately come back empty… making me think they all hit the GPU at once.

I could be wrong on this. If the individual batch operations are run serially, let me know. Then I’ll do more debugging and figure out what is going on.
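For reference, if the runs did turn out to be parallel, one generic way to force serial GPU access would be to funnel every job through a single-threaded executor. A minimal sketch in plain `java.util.concurrent` (nothing SciJava-specific; the job contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Funnel all "GPU" jobs through a single-threaded executor so no two jobs
// can ever touch the device at the same time, no matter how many callers
// submit work concurrently.
public class SerialGpu {

    // Runs the jobs strictly one after another, in submission order.
    static List<Integer> runSerially(List<Callable<Integer>> jobs) throws Exception {
        ExecutorService gpu = Executors.newSingleThreadExecutor();
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Callable<Integer> job : jobs) {
                futures.add(gpu.submit(job)); // queued, never run concurrently
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : futures) {
                results.add(f.get()); // blocks until that job finishes
            }
            return results;
        } finally {
            gpu.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Each callable stands in for one deconvolution run.
        List<Callable<Integer>> jobs = Arrays.asList(() -> 1, () -> 2, () -> 3);
        System.out.println(runSerially(jobs)); // [1, 2, 3]
    }
}
```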

No, they’re run serially; I was planning to improve this in the future:

The loop that currently does the processing is here:

and the actual call to ModuleService::run (and further on .get()) is just a few lines below:


Thanks a lot for the quick reply @imagejan. The error was on my end, but studying your code helped me clue in right away to what I did.

I wrapped a command that took Dataset as an input with a command that took File; however, I needed to add a line to wait for the outputs to be generated:

Map<String, Object> outputs = instance.get().getOutputs();
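The pattern generalizes beyond SciJava: when a framework hands back a `Future` for an asynchronous run, the outputs only exist after `get()` returns. A generic sketch with plain `java.util.concurrent` (the map and key names are illustrative, not the actual SciJava API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Why the extra line was needed: the wrapped command runs asynchronously,
// so reading its outputs before the Future completes reads them too early.
// get() blocks until the run has actually finished.
public class WaitForOutputs {

    // Simulates a module run: a Future whose result carries the outputs map.
    static Future<Map<String, Object>> runAsync(ExecutorService exec) {
        return exec.submit(() -> {
            Map<String, Object> outputs = new HashMap<>();
            outputs.put("deconvolved", "result-image"); // placeholder output
            return outputs;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        // Blocking on get() before reading the outputs is the key step.
        Map<String, Object> outputs = runAsync(exec).get();
        System.out.println(outputs.get("deconvolved")); // result-image
        exec.shutdown();
    }
}
```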

Glad you found the issue, @bnorthan!

With deployment of imagej-plugins-batch to the update site, this wrapping shouldn’t be necessary any more, as the batch plugin includes a BatchInputProvider that provides Dataset inputs from a list of Files.

If you want to try it now, you can put imagej-plugins-batch-0.1.0.jar in your jars folder (along with the latest version of batch-processor). I’d appreciate any feedback about possible issues (such as memory handling with large datasets, etc.), as I didn’t test it extensively yet.

Hi @imagejan

After adding imagej-plugins-batch I now have the option to perform batch processing on the original command, as shown in the screenshot below. However, after running the command, no result images appear. Does the batch processor always just generate a new Img or Dataset for each result? Or does it save results? If you needed to process many images, you would probably want the latter behavior.

Running the batch (‘YacuDecu Deconvolution Batch’) version of the command still works fine. Result images are shown for each file processed. The only issue I found was that there is a second Dataset input (the PSF), and it simply defaults to using the active image window as the PSF instead of asking me to choose it.


Thanks for your valuable feedback, @bnorthan!

All that the batch processor currently does is open a File as a Dataset and provide it as an input to your command (without showing it in the UI).
It is (currently) up to your command/plugin to save anything if desired, as we only collect ItemIO.OUTPUT parameters that can be gathered in a Table. I thought of implementing automatic saving of any outputs of type Img or Dataset, but I figured it wouldn’t be flexible enough for all the envisioned use cases, hence my plan to support dynamic output processing in a similar way to the current dynamic input providers:

That means, in the future you should be able to choose an output processor that offers the option to:

  • save image outputs into a specified output folder, with a specified naming pattern, or
  • display image outputs in an image window, or
  • add image outputs to a common BigDataViewer window, or
  • … (anything you can think of)

The displaying of images is due to the automatic post-processing of the SciJava framework and isn’t controlled by the batch processor. I don’t think this is always desired, and some more flexible way of controlling pre- and post-processing of parameters would be desirable here. See the following related issues:

This is caused by your PSF parameter being the only (remaining) Dataset input (after the input image has been populated by the batch processor), and therefore now being auto-filled by SciJava’s ActiveImagePlusPreprocessor. I’m not entirely happy with this default behavior of SciJava, but as hinted by @ctrueden in another issue comment:

Would this work for you?


Hi @bnorthan
you are doing really nice work!
I have a little question (and somehow already an answer, I think):
I guess that these jar files are compiled specifically for Win10 (i.e. your instructions won’t work for Fiji on other systems, macOS for example).

best regards


Hi @GoranL

It compiles on Linux and Windows, but I have not tried Mac yet. The jars I released for the purposes of this thread are Windows-only.

Do you know of a convenient way to set up a macOS VM? (As far as I know, Amazon only supports Windows and Linux.) That way I could test a Mac release, though this would be a longer-term initiative.


I’ve heard about VMware being able to mount/run macOS (InsanelyMac),
but I never tried it that way (a Mac VM on Windows).
Do you have some kind of “idiot-proof” step-by-step guide to compile on Mac?

EDIT: the guys from techviewer did a guide for that kind of VM (VM MacOS on Windows).
Hope this helps.
hope this helps

Hi @Vytas

I modified the groovy script so that it behaves like the command version: it no longer defines an explicit border size, which forces the low-level code to extend the image based on the PSF size. The PSF is assigned an explicit size (which you can change to experiment with a bigger PSF). You can find the updated script here. Let me know if this solves the potential border issues you mentioned.