Maximum z-projection using CLIJ2

Hi,

I tried to do a maximum Z-projection using CLIJ2 in Fiji on a virtual stack. It did not work, and I got this error message:

(Fiji Is Just) ImageJ 2.1.0/1.53g60; Java 1.8.0_172 [64-bit]; Windows 7 6.1; 13630MB of 73588MB (18%)
 
java.lang.NegativeArraySizeException
	at net.haesleinhuepf.clij.converters.implementations.RandomAccessibleIntervalToClearCLBufferConverter.copyRandomAccessibleIntervalToClearCLBuffer(RandomAccessibleIntervalToClearCLBufferConverter.java:72)
	at net.haesleinhuepf.clij.converters.implementations.RandomAccessibleIntervalToClearCLBufferConverter.convert(RandomAccessibleIntervalToClearCLBufferConverter.java:42)
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convertLegacy(ImagePlusToClearCLBufferConverter.java:204)
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convert(ImagePlusToClearCLBufferConverter.java:146)
	at net.haesleinhuepf.clij.converters.implementations.ImagePlusToClearCLBufferConverter.convert(ImagePlusToClearCLBufferConverter.java:24)
	at net.haesleinhuepf.clij.CLIJ.convert(CLIJ.java:475)
	at net.haesleinhuepf.clij.CLIJ.push(CLIJ.java:406)
	at net.haesleinhuepf.clij.macro.CLIJHandler.pushToGPU(CLIJHandler.java:267)
	at net.haesleinhuepf.clij.macro.AbstractCLIJPlugin.run(AbstractCLIJPlugin.java:415)
	at ij.plugin.filter.PlugInFilterRunner.processOneImage(PlugInFilterRunner.java:265)
	at ij.plugin.filter.PlugInFilterRunner.<init>(PlugInFilterRunner.java:114)
	at ij.IJ.runUserPlugIn(IJ.java:243)
	at ij.IJ.runPlugIn(IJ.java:204)
	at ij.Executer.runCommand(Executer.java:151)
	at ij.Executer.run(Executer.java:66)
	at java.lang.Thread.run(Thread.java:748)

At first I suspected it was a size issue (the image is around 6.4 GB), so I loaded the image fully into memory and ran the Z-projection on it. It worked without a hitch.

So, my question is: why does the Z-projection not work on virtual stacks? Is it because the full image needs to be loaded into Fiji in order to send it to the GPU?

Also, when I ran the Z-projection on the image (single channel, z-stack, time series), it lost the time dimension.

I also tried the bounded maximum Z-projection and got the same result. Is the only workaround to split all the time frames, run the CLIJ maximum Z-projection on each, and then concatenate them back?

Thank you for the attention.

Kind regards,
José Marques

Edit: GPU - GeForce GTX 970

Hi José @zemarques,

in order to compute a maximum z-projection in CLIJ, the whole stack is sent to the GPU, so the whole stack must fit in GPU memory. You can find out the maximum size of image that fits on your GPU by doing this exercise and reading the line starting with “MaxMemoryAllocationSizeInBytes”.
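
For example, a minimal macro sketch (assuming the CLIJ2 update site is installed; the exact wording of the report may differ between versions):

```
// initialize the CLIJ2 macro extensions on the default OpenCL device
run("CLIJ2 Macro Extensions", "cl_device=");
// print a report about the available OpenCL devices to the Log window;
// look for the line starting with "MaxMemoryAllocationSizeInBytes"
Ext.CLIJ2_clInfo();
```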

Furthermore, I guess you know this already, but just in case: a single operation executed on the GPU may be slower than doing it on the CPU. GPU acceleration makes a lot of sense when many steps are executed in a continuous workflow on the GPU. See more about that here:

To use CLIJ efficiently, you execute a whole workflow consisting of multiple operations on a single time point, and only then process the next time point. That’s why operations in CLIJ do not support time-lapse and multi-channel data: if you executed the first operation on all time points and then the second operation on all time points, you would need to push and pull data between CPU and GPU memory all the time, which is very inefficient. See also slides 18-21 in this presentation: https://github.com/clEsperanto/i2k2020_tutorial_clij_clesperanto/raw/master/GPU_accelerated_image_processing.pdf
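
Regarding your workaround question: yes, the idea is to split by time point, project, and re-assemble, but you can let a macro loop do the splitting for you. A rough sketch of that pattern (the window titles and the Duplicate / Images-to-Stack steps are my assumptions; adapt them to your data):

```
// maximum z-projection of a single-channel 4D stack (x, y, z, t),
// processed one time point at a time to keep GPU memory usage low
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();

input = getTitle();
getDimensions(width, height, channels, slices, frames);

for (t = 1; t <= frames; t++) {
    // duplicate the z-stack of one time point on the CPU side
    selectWindow(input);
    run("Duplicate...", "title=frame duplicate frames=" + t);
    // push only this 3D stack to the GPU, project, and pull the result back
    Ext.CLIJ2_push("frame");
    Ext.CLIJ2_maximumZProjection("frame", "projection");
    Ext.CLIJ2_pull("projection");
    close("frame");
    selectWindow("projection");
    rename("MAX_t" + t);
    // free GPU memory before the next time point
    Ext.CLIJ2_clear();
}
// close the original and re-assemble the 2D projections into a time series
close(input);
run("Images to Stack", "name=MAX use");
```

This should also get around the virtual-stack problem, since only one time point needs to be loaded into memory at a time.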

May I ask what other steps you have in your workflow? I could then point you to the right tutorial if you like.

Cheers,
Robert

MaxMemoryAllocationSizeInBytes: 1073741824 

So, about 1 GB of allocatable memory. It wouldn’t be able to hold the whole image.

As I feared, opening the image took more time than processing it, and the processing ran on the CPU for the whole time series.

The rest of the workflow would be to track cells that divide. For that, I was thinking about using Mastodon.

The user even says they prefer to analyze everything else manually (even the tracking), so there’s not much more to the workflow.


Yes, in that case GPU-accelerated processing cannot help :wink:

If you come to the point where a workflow is slow, I’m happy to GPU-accelerate it with you. Just let me know!

Cheers,
Robert
