I now understand your problem. Batch processing in Fiji means that you can automate processing by applying the same operation to multiple files. It does this in sequence, not in parallel.
A single instance of Fiji is used the whole time. To do things in parallel you need multiple instances of Fiji that process different parts of your data. For this you need a layer on top of Fiji: something that splits your data into chunks and calls an instance of Fiji on each chunk.
Thus you need to look into other options here. Processing on a High Performance Computing (HPC) cluster is one such option: you would interact with a queuing system and dispatch jobs in parallel, but this requires existing infrastructure. Alternatively, you can also run multiple instances of Fiji on a single processing station. The important thing is that you have some code that splits up the data and calls an individual Fiji instance on each part (see the sketch below).
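To make this concrete, here is a minimal Bash sketch of such a layer for a single processing station. It assumes Fiji is installed under /path/to/Fiji.app and that process.ijm is your own macro that reads the file it should work on via getArgument(); both are placeholders for your setup.

```bash
#!/bin/bash
# Run one headless Fiji instance per .tif file, at most 4 at a time.
# /path/to/Fiji.app and process.ijm are placeholders for your setup.
FIJI=/path/to/Fiji.app/ImageJ-linux64

# xargs -P 4 keeps up to 4 Fiji processes running in parallel; -n 1 hands
# each one a single file name, which -macro passes on to the macro
# (readable there via getArgument()).
find data -name '*.tif' -print0 |
  xargs -0 -n 1 -P 4 "$FIJI" --headless --console -macro process.ijm
```

Each Fiji instance here is an independent process, so this scales with the number of cores (and the amount of memory) of your machine.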
Here is a project that implements this with Fiji on an HPC cluster. It uses an additional tool called Snakemake (https://snakemake.readthedocs.io/en/stable/), a workflow manager that creates and dispatches the individual jobs of a complex workflow. It can also run in parallel on a single processing station: https://imagej.net/Automated_workflow_for_parallel_Multiview_Reconstruction
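As an illustration of what such a workflow looks like, here is a minimal Snakemake sketch (one job per file). The directory layout, the ImageJ-linux64 path and the process.ijm macro are my assumptions, not part of the linked project:

```
# Snakefile: one Fiji job per input image.
# Sample names are inferred from the files in raw/.
SAMPLES = glob_wildcards("raw/{sample}.tif").sample

rule all:
    input:
        expand("processed/{sample}.tif", sample=SAMPLES)

rule fiji_process:
    input:
        "raw/{sample}.tif"
    output:
        "processed/{sample}.tif"
    # process.ijm is a placeholder macro that splits its single
    # argument into the input and output path.
    shell:
        "/path/to/Fiji.app/ImageJ-linux64 --headless --console "
        "-macro process.ijm '{input},{output}'"
```

Running snakemake --cores 4 then dispatches up to four Fiji instances at once on one machine; the same Snakefile can be submitted to a cluster queue through Snakemake's cluster execution support.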
For less complex workflows you could also just create individual jobs with simple Bash scripts. This is described here: https://imagej.net/SPIM_Registration_on_cluster_(deprecated)#Original_SPIM_registration_pipeline
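For illustration, such a script could be as simple as a loop that submits one job per file to the queuing system. This sketch assumes a SLURM cluster (sbatch) and reuses the placeholder paths and macro from above; for other schedulers (qsub, bsub, ...) only the submission command changes:

```bash
#!/bin/bash
# Submit one cluster job per .tif file; each job runs one headless Fiji.
# Assumes SLURM; /path/to/Fiji.app and process.ijm are placeholders.
FIJI=/path/to/Fiji.app/ImageJ-linux64

for f in data/*.tif; do
  sbatch --job-name "fiji-$(basename "$f")" \
         --wrap "$FIJI --headless --console -macro process.ijm '$f'"
done
```

The queuing system then takes care of running the jobs in parallel across the available nodes.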
There are also other projects addressing this, for example "Remote HPC cluster parallelization support in SciJava plugins".
KNIME could also be an option…