A question about batch processing speed and what the best practice is.
I’m working on 4D stacks from 10 to 30 GB. I’ve trained an ilastik pixel classifier on a 30 GB .h5 file, and exporting the probabilities took many hours. I’ve been trying to batch process other datasets, either through the GUI or the Fiji plugin, but both take hours to run. It also seems to use only one CPU core, at 100%.
Is this normal? What’s the best/fastest way to batch process these datasets? Is using the headless mode any faster and are there ways to parallelize the process?
I’ve looked at the ilastik documentation on controlling CPU and RAM resources, and I was wondering whether setting the LAZYFLOW_THREADS and LAZYFLOW_TOTAL_RAM_MB environment variables would help?
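For context, I assume a headless run with those variables set would look roughly like this (the project name, paths, and the thread/RAM values are just placeholders for my setup):

```shell
# Sketch of a headless ilastik batch run with resource limits set
# via environment variables; values below are guesses for my machine.
LAZYFLOW_THREADS=8 \
LAZYFLOW_TOTAL_RAM_MB=16000 \
./run_ilastik.sh --headless \
    --project=my_pixel_classifier.ilp \
    --export_source="Probabilities" \
    --output_format=hdf5 \
    --output_filename_format=/path/to/output/{nickname}_probs.h5 \
    /path/to/data/stack1.h5/data
```

Is this the right way to invoke it, and would it actually use more than one core?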
Thank you very much!