Maximum z-projection using CLIJ2


I tried to do a maximum Z-projection using CLIJ2 in Fiji on a virtual stack. It did not work, and I got this error message:

(Fiji Is Just) ImageJ 2.1.0/1.53g60; Java 1.8.0_172 [64-bit]; Windows 7 6.1; 13630MB of 73588MB (18%)
	at net.haesleinhuepf.clij.converters.implementations.RandomAccessibleIntervalToClearCLBufferConverter.copyRandomAccessibleIntervalToClearCLBuffer(
	at net.haesleinhuepf.clij.converters.implementations.RandomAccessibleIntervalToClearCLBufferConverter.convert(
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convertLegacy(
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convert(
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convert(
	at net.haesleinhuepf.clij.CLIJ.convert(
	at net.haesleinhuepf.clij.CLIJ.push(
	at net.haesleinhuepf.clij.macro.CLIJHandler.pushToGPU(
	at ij.plugin.filter.PlugInFilterRunner.processOneImage(
	at ij.plugin.filter.PlugInFilterRunner.<init>(
	at ij.IJ.runUserPlugIn(
	at ij.IJ.runPlugIn(
	at ij.Executer.runCommand(

At first I suspected it was a size issue (the image is around 6.4 GB), so I tried running the Z-projection on the fully loaded image instead. It worked without a hitch.

So, my question is: why does the Z-projection not work on virtual stacks? Is it because the full image needs to be loaded into Fiji in order to send it to the GPU?

Also, when I ran the Z-projection on the image (single channel, z-stack, time series), it lost the time dimension.

I also tried the bounded maximum Z-projection and got the same result. Is the only workaround to split off all the time frames, run the CLIJ maximum Z-projection on each, and then concatenate them back?

Thank you for the attention.

Kind regards,
José Marques

Edit: GPU - GeForce GTX 970

Hi José @zemarques,

in order to make a maximum z-projection in CLIJ, the whole stack is sent to the GPU. Thus, the whole stack must fit in GPU memory. You can find out the maximum size of images that fit on your GPU by doing this exercise and reading the line starting with “MaxMemoryAllocationSizeInBytes”.
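As a rough back-of-the-envelope check (a sketch, not anything CLIJ does for you), you can compare the memory a stack needs against that reported limit. The stack dimensions below are invented placeholders; the limit is the value quoted further down in this thread:

```python
# Estimate whether a single stack fits in one GPU allocation.
# Dimensions and bit depth are hypothetical placeholders.
width, height, slices = 2048, 2048, 200   # example stack
bytes_per_pixel = 2                       # 16-bit image

stack_bytes = width * height * slices * bytes_per_pixel
max_alloc = 1_073_741_824  # MaxMemoryAllocationSizeInBytes reported in this thread

print(stack_bytes, stack_bytes <= max_alloc)  # → 1677721600 False
```

Here the example stack (~1.6 GB) already exceeds the 1 GB allocation limit, so pushing it to the GPU would fail.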

Furthermore, you may know this already, but just in case: a single operation executed on the GPU may be slower than doing it on the CPU. GPU acceleration makes a lot of sense when many steps are executed in a continuous workflow on the GPU. See more about that here:

In order to use CLIJ efficiently, it is necessary to execute a whole workflow consisting of multiple operations on a single time point, and only afterwards process the next time point. That’s why operations in CLIJ do not support timelapse and multi-channel data. If you executed the first operation on all time points and then the second operation on all time points, you would need to push and pull data between CPU and GPU memory all the time, which is very inefficient. See also slides 18-21 in this presentation:
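The per-time-point pattern described above can be sketched with NumPy standing in for the GPU operations (the array shapes are invented for illustration; in a real CLIJ macro, push/pull would move data to and from the GPU):

```python
import numpy as np

# Hypothetical 4D time series with axes (t, z, y, x).
timeseries = np.random.rand(5, 10, 64, 64)

projections = []
for t in range(timeseries.shape[0]):
    stack = timeseries[t]        # "push" a single time point
    proj = stack.max(axis=0)     # maximum z-projection of that time point
    projections.append(proj)     # "pull" the result back

result = np.stack(projections)   # reassemble the time dimension
print(result.shape)              # → (5, 64, 64)
```

This mirrors the split / project / concatenate workaround asked about above: each time point is processed on its own, and the results are stacked back into a time series at the end.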

May I ask what other steps you have in your workflow? I could then point you to the right tutorial if you like.


MaxMemoryAllocationSizeInBytes: 1073741824 

So, about 1 GB of allocatable memory. That wouldn’t be enough to process the whole image.

As I feared, opening the image took more time than processing it, and the processing ran on the CPU for the whole time series.

The rest of the workflow would be tracking cells that divide. For that, I was thinking about using Mastodon.

The user even says they prefer to analyze everything else manually (even the tracking), so there’s not much more to the workflow.


Yes, then GPU-accelerated processing cannot help :wink:

If you come to the point where a workflow is slow, I’m happy to GPU-accelerate it with you. Just let me know!

