Fiji: Batch Mode and Headless Mode together


I want to process images in batch mode on a server, hence also in headless mode. I have tried Jython scripting and Macro Language scripting.

However, when I run the scripts/macros from the command line, the images are processed one by one. Can you give an example of how I can run the script/macro in batch mode and headless mode?


PS: A couple of links I followed:

One says to use run("Batch process", arg).

Then I tried running the Jython script, but it doesn't run in batch mode.

I think you need to elaborate a bit on the problem you are facing.
Batch mode allows you to process multiple input images one by one with the same script; with batch mode turned on in the script, the images are also processed without being visualized.

Headless mode just does this without any GUI interaction.

So what do you want to achieve? Or what does not work?

I want to run the plugin OrientationJ on around 1000 images. As it will be running on a server, I need to use headless mode.
Now, the process has to be scalable, so I was using batch mode so that multiple images can be processed at the same time.

These are the methods I tried:

  1. Using Jython scripting: I passed the input and output directories as parameters and ran the script in headless mode. However, it processed the images sequentially.
  2. Using a macro: I wrote an .ijm script that took the input and output directories as parameters and set batch mode to true. When I ran it in headless mode, it also processed the images sequentially.
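For reference, such a headless invocation can also be assembled and launched from Python with `subprocess`. This is only a minimal sketch: the Fiji installation path, the script name, and the `input_dir`/`output_dir` parameter names are assumptions to adapt to your setup.

```python
import subprocess

def build_headless_cmd(fiji, script, in_dir, out_dir):
    """Build a Fiji headless command line for a parameterized script.
    The last element follows the SciJava script-parameter convention
    (name="value" pairs); adjust the names to match your own script."""
    return [
        fiji, "--ij2", "--headless",
        "--run", script,
        'input_dir="{}",output_dir="{}"'.format(in_dir, out_dir),
    ]

# Hypothetical paths -- replace with your own installation and script.
cmd = build_headless_cmd("/opt/Fiji.app/ImageJ-linux64",
                         "orientation_batch.py",
                         "/data/in", "/data/out")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually launch Fiji
```

Even with a wrapper like this, a single Fiji instance still walks through the images one after another.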

I also tried this through the GUI, choosing the Batch option for execution. However, it only shows an output result table with the list of input filenames; it doesn't process the images.

I am sorry for being less informative before. I hope it is clear now.

No problem!

I now understand your problem. Batch in Fiji means that one can automate processing and apply the same operation to multiple files. It does so in sequence, not in parallel.

You just use a single instance of Fiji all the time. For doing things in parallel you need multiple instances of Fiji that process different parts of your data. For this you need a layer on top of Fiji: something that splits your data into chunks and calls an instance of Fiji on each chunk.
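That top layer can be sketched in plain Python: split the file list into chunks and launch one independent Fiji process per chunk. The Fiji path, the `process_chunk.py` script, and the `files` parameter are placeholders, not a real API.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def chunk(items, n):
    """Split items into n roughly equal chunks (round-robin)."""
    return [items[i::n] for i in range(n)]

def run_fiji_on(files):
    """Launch one Fiji instance on a chunk of files (placeholder command).
    Each instance is a separate OS process, so chunks run in parallel."""
    cmd = ["/opt/Fiji.app/ImageJ-linux64", "--ij2", "--headless",
           "--run", "process_chunk.py", 'files="{}"'.format(",".join(files))]
    return subprocess.run(cmd).returncode

images = ["img%03d.tif" % i for i in range(10)]
chunks = chunk(images, 4)
print([len(c) for c in chunks])  # -> [3, 3, 2, 2]
# with ThreadPoolExecutor(max_workers=4) as pool:   # uncomment with a
#     results = list(pool.map(run_fiji_on, chunks)) # real Fiji install
```

The same pattern scales from one workstation (a process pool) up to a cluster (one job per chunk).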

Thus you need to look into other options here. Processing on a High Performance Computing (HPC) cluster is one such option: you would interact with a queuing system and dispatch jobs in parallel, which requires existing infrastructure. But you can also run multiple instances of Fiji on a single processing station. The important thing is that you have some code that splits up the data and calls individual Fiji instances on the chunks.

Here is a project that implements this with Fiji on an HPC cluster. It uses an additional tool called Snakemake, a workflow manager that handles the creation and dispatch of the individual jobs of a complex workflow. This can also be run in parallel on a single processing station:

For less complex workflows you could also just create individual jobs with simple Bash scripts. This is described here:
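In that spirit, the individual job scripts could even be generated programmatically; a small sketch that writes one shell script per chunk, ready to be run in a loop or submitted to a queue. All paths and the Fiji command line here are hypothetical.

```python
import os
import tempfile

def write_job_scripts(chunks, job_dir, fiji="/opt/Fiji.app/ImageJ-linux64"):
    """Write one small shell script per chunk of images. Each script calls
    Fiji headless on its chunk; fiji path and script name are placeholders."""
    paths = []
    for i, files in enumerate(chunks):
        path = os.path.join(job_dir, "job_%02d.sh" % i)
        with open(path, "w") as fh:
            fh.write("#!/bin/sh\n")
            fh.write('{} --ij2 --headless --run process_chunk.py '
                     '\'files="{}"\'\n'.format(fiji, ",".join(files)))
        paths.append(path)
    return paths

job_dir = tempfile.mkdtemp()
jobs = write_job_scripts([["a.tif", "b.tif"], ["c.tif"]], job_dir)
print(len(jobs))  # -> 2
```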

There are also other projects addressing this: Remote HPC cluster parallelization support in SciJava plugins

KNIME could also be an option…


Yeah, my understanding of batch mode was not correct. Thanks for clarifying that.

Right now, I believe I will use a Python wrapper for that, as I need to interact with S3 to fetch the images and then call multiple instances of Fiji.

Thanks for sharing information about using Fiji on HPC clusters. I will look into them :+1:
