I just want to add some notes on how you could use the existing ImageJ2 resources for what you are doing with DeepImageJ. So here is a quick run-through of the existing ImageJ2 tools.
You need these two dependencies: imagej-tensorflow for library loading and conversion between tensors and images, and imagej-modelzoo, which can do most of what CSBDeep was always able to do (prediction with arbitrary image-to-image networks, tiling), but can additionally handle multiple input and output nodes and provides the modelzoo specification API:
<dependency>
  <groupId>net.imagej</groupId>
  <artifactId>imagej-tensorflow</artifactId>
  <version>1.1.6</version>
</dependency>
<dependency>
  <groupId>net.imagej</groupId>
  <artifactId>imagej-modelzoo</artifactId>
  <version>0.6.0</version>
</dependency>
Regarding running them from an IJ1 plugin: @ctrueden already explained how to get the Context in IJ1. So we do that (or create a new Context) and fetch the services we need; in an IJ2 command we could use Parameter annotations instead (see the sketch after the following snippet):
Context context = (Context) IJ.runPlugIn("org.scijava.Context", "");
if (context == null) context = new Context();
// get services which we want to use
TensorFlowService tf = context.service(TensorFlowService.class);
LogService log = context.service(LogService.class);
UIService ui = context.service(UIService.class);
OpService op = context.service(OpService.class);
ModelZooService modelZoo = context.service(ModelZooService.class);
ScriptService script = context.service(ScriptService.class);
// only needed for opening and saving via IJ2
// DatasetIOService datasetIO = context.service(DatasetIOService.class);
// DatasetService dataset = context.service(DatasetService.class);
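For completeness, here is a minimal sketch of the Parameter annotation alternative mentioned above: in a SciJava command the framework injects the services for you, so no Context lookup is needed. The class name and menu path are invented for illustration, and the import locations for TensorFlowService and ModelZooService are my assumption, so adjust as needed:
import org.scijava.command.Command;
import org.scijava.log.LogService;
import org.scijava.plugin.Parameter;
import org.scijava.plugin.Plugin;
import org.scijava.ui.UIService;
import net.imagej.ops.OpService;
// assumed package locations, adjust if needed:
import net.imagej.tensorflow.TensorFlowService;
import net.imagej.modelzoo.ModelZooService;

@Plugin(type = Command.class, menuPath = "Plugins>My Prediction")
public class MyPredictionCommand implements Command {

    // the SciJava framework injects these services, no Context lookup needed
    @Parameter
    private TensorFlowService tf;

    @Parameter
    private LogService log;

    @Parameter
    private UIService ui;

    @Parameter
    private OpService op;

    @Parameter
    private ModelZooService modelZoo;

    // ScriptService etc. can be injected the same way

    @Override
    public void run() {
        // use the injected services here
    }
}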
This is how we can load the TensorFlow library. This snippet is not strictly needed because it is executed automatically during prediction, but if you want to handle it differently, you can:
// load TF library, show error if it failed
tf.loadLibrary();
if(!tf.getStatus().isLoaded()) {
log.error(tf.getStatus().getInfo());
}
Now we load the input image - I guess you want the IJ1 way, but for completeness I also added the code for IJ2:
// input paths
String imgPath = "/home/random/tmp/deepimagej/blobs.tif";
String outPath = "/home/random/tmp/deepimagej/output.tif";
// load image via IJ2
// Img input = datasetIO.open(imgPath);
// load image via IJ1
Img input = ImageJFunctions.wrap(IJ.openImage(imgPath));
ui.show("input", input);
We use the ModelZooService to load the model archive:
// load model
String modelPath = "/home/random/tmp/deepimagej/model.bioimage.io.zip";
ModelZooArchive modelArchive = modelZoo.open(modelPath);
Here is how you can extract a macro from the archive and execute it via the ScriptService. We create a copy of the input image so that the original is not modified:
Img inputCopy = op.copy().img(input);
ui.show("input copy", inputCopy);
// macro preprocessing
File preprocessingMacro = modelArchive.extract("preprocessing.ijm");
script.run(preprocessingMacro, true);
Now we create a default imagej-modelzoo prediction instance and also adjust e.g. the tiling options:
// setup prediction
DefaultSingleImagePrediction prediction = new DefaultSingleImagePrediction(context);
prediction.setInput("input", inputCopy, "XY");
prediction.setTrainedModel(modelPath);
// tiling options - if memory is insufficient, the number of tiles will automatically be increased, but it helps to start with a higher tile count for bigger images
prediction.setNumberOfTiles(1);
Then we run the prediction and display the output:
prediction.run();
RandomAccessibleInterval output = prediction.getOutput();
ui.show("output", output);
Next, the postprocessing, which in your case is a macro extracted from the model archive, is executed:
// macro postprocessing
File postprocessingMacro = modelArchive.extract("postprocessing.ijm");
script.run(postprocessingMacro, true);
And here again the two ways of storing the result - IJ1 and IJ2:
// save via IJ1
IJ.save(ImageJFunctions.wrap(output, "output"), outPath);
// save via IJ2
// datasetIO.save(dataset.create(output), outPath);
That’s it. Additionally, you can adjust the model specification via the API, e.g. from your UI. Here is a short example; you can find a more extensive one here:
// adjust model
modelArchive.getSpecification().setName("my model");
modelArchive.getSpecification().setDescription("my model description");
modelZoo.save(modelArchive, modelPath);
The API is probably incomplete / not fully in sync with the official modelzoo specification, and I would really appreciate issues / PRs in the imagej-modelzoo repository to make sure our specifications match. It is just a first draft that works well for DenoiSeg and N2V, but it would be great to work together on this.
The code could be shortened further if you used an IJ2 command (see the sketch below), and I will create an example repository demonstrating custom prediction commands soon. I will also write more documentation about the ImageJ2 display that exists for the model, but that seems less relevant since you want to use your own UI. The code above is hopefully sufficient to demonstrate how to make use of the existing ImageJ2 TF prediction / modelzoo specification backend resources. Please let me know if anything is unclear, something I wrote is wrong, or the code is causing issues.
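To illustrate that first point, here is a rough, untested sketch of how the prediction part could look as an IJ2 command, reusing the classes from the snippets above plus org.scijava.ItemIO. The class name, menu path and parameter labels are invented, and the macro pre-/postprocessing is left out for brevity:
@Plugin(type = Command.class, menuPath = "Plugins>My Modelzoo Prediction")
public class ModelZooPredictionCommand implements Command {

    @Parameter
    private Context context;

    @Parameter
    private OpService op;

    @Parameter(label = "Input image")
    private Img input;

    @Parameter(label = "Model archive (.bioimage.io.zip)")
    private File modelFile;

    @Parameter(type = ItemIO.OUTPUT)
    private RandomAccessibleInterval output;

    @Override
    public void run() {
        try {
            // work on a copy so the original image is not modified
            Img inputCopy = op.copy().img(input);
            // setup and run the prediction as in the snippets above
            DefaultSingleImagePrediction prediction = new DefaultSingleImagePrediction(context);
            prediction.setInput("input", inputCopy, "XY");
            prediction.setTrainedModel(modelFile.getAbsolutePath());
            prediction.setNumberOfTiles(1);
            prediction.run();
            output = prediction.getOutput();
        } catch (Exception e) {
            // in a real command you would report this via the LogService
            e.printStackTrace();
        }
    }
}
The input image, model file and output are handled by the framework via the Parameter annotations, so the manual open / show / save calls from above are no longer needed.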
EDIT: Here is the data for the code above.