I am starting to learn how to use CLIJ, and I am super excited about it! I am using Python and connecting to CLIJx via pyimagej, so I am using the Java-specific methods from CLIJx.
I am able to get a (16-bit) image onto my GPU successfully and do some operations on it. However, I would like to convert this image to 32-bit, ideally on the GPU to avoid a data transfer. I see documentation for the ImageJ macro extensions that do this (convertUInt8 etc.), but I don't think there is a dedicated method for it accessible from the Java package.
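For reference, this is roughly the macro-side call I am trying to reproduce from Python. The `Ext.CLIJ2_convertFloat` name and the extension-initialization line are taken from the online macro reference, so treat the exact names as my assumption; I have only sketched the string here rather than running it:

```python
# Sketch of the macro-extension route, built as a string that pyimagej
# could execute. The Ext.CLIJ2_* names come from the CLIJ2 macro
# reference and are assumptions on my part.
macro = """
run("CLIJ2 Macro Extensions", "cl_device=");
source = getTitle();
Ext.CLIJ2_push(source);
destination = "converted";
Ext.CLIJ2_convertFloat(source, destination);
Ext.CLIJ2_pull(destination);
"""

# With an initialized gateway (ij = imagej.init(...)), this would run as:
# ij.py.run_macro(macro)
```

What I am asking is how to express the `Ext.CLIJ2_convertFloat(source, destination)` step through the Java API instead.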
In general I have been struggling to translate between the reference instructions on the web, which document the ImageJ macro language commands, and the actual methods available to me in the Java source code (which I think is what I am ultimately accessing via pyimagej). The two are not the same, although the naming differences are fairly consistent for the most part.
If I am using the Java functions available in net.haesleinhuepf.clijx.CLIJx, how would I convert data types on the GPU?
Minimal example (I think the error is pretty unambiguous, but just so you have a concrete idea of what I'm doing):
```python
import skimage.io as io
import imagej

# I am using my local Fiji, just updated from the clij2 update site today.
# Let me know if I should report any versioning for clij2, I'm not sure how!
ij = imagej.init('/Applications/Fiji.app')

from jnius import autoclass

im = io.imread(path_to_some_image)

CLIJx = autoclass('net.haesleinhuepf.clijx.CLIJx')
clijx = CLIJx.getInstance()

im_ij = ij.py.to_java(im)
im_on_gpu = clijx.push(im_ij)
im_float = clijx.create(im_on_gpu.getDimensions(), clijx.Float)

# This is where the error happens:
clijx.convertFloat(im_on_gpu, im_float)
```

```
AttributeError: 'net.haesleinhuepf.clijx.CLIJx' object has no attribute 'convertFloat'
```
I have also tried net.haesleinhuepf.clij2.CLIJ2, and the original CLIJ class as well, but none of them seem to have an appropriate conversion method.
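In case it matters, this is roughly how I have been searching the wrapped instances for candidate methods from Python. The real check runs `dir()` on the pyjnius-wrapped `clijx` object from my example above; since that needs a running JVM, a stand-in class shows the pattern here:

```python
# Pattern I used to hunt for conversion-related methods on the wrapped
# Java instance. FakeCLIJ is a stand-in for the pyjnius-wrapped object,
# which needs a running JVM to inspect for real.
class FakeCLIJ:
    def convertFloat(self):
        pass

    def gaussianBlur(self):
        pass


def find_methods(obj, keyword):
    """Return attribute names containing the keyword (case-insensitive)."""
    return sorted(m for m in dir(obj) if keyword.lower() in m.lower())


print(find_methods(FakeCLIJ(), "convert"))  # → ['convertFloat']
```

On the real `clijx` instance the equivalent call came back empty for me, which is why I suspect the conversion methods are not exposed.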
For now I can work around it by doing the data type conversion before pushing to the GPU. Perhaps that is what I should be doing anyway?
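Concretely, the workaround I mean is just a NumPy dtype cast before the push; here a small dummy 16-bit array stands in for the `im` loaded with skimage in my example above:

```python
import numpy as np

# CPU-side fallback: cast to 32-bit float before pushing to the GPU.
# A dummy 16-bit array stands in for the image loaded via skimage.io.imread.
im = np.array([[0, 1000], [2000, 65535]], dtype=np.uint16)

im_float = im.astype(np.float32)  # 16-bit unsigned -> 32-bit float

print(im_float.dtype)  # → float32
# Then, as before: im_on_gpu = clijx.push(ij.py.to_java(im_float))
```

The obvious downside is that this quadruples the size of the host-to-device transfer compared to converting on the GPU, which is why I would still prefer a GPU-side method.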
Thanks very much!