Ilastik autocontext stage2 probability output & range incorrect in headless mode


I have trained an autocontext workflow in Ilastik that outputs correct stage1 and stage2 probabilities on Windows when run through the GUI. However, when I run the same .ilp file in headless mode on a Linux server, the stage1 probabilities are correct but the stage2 probabilities look strange and have odd value ranges.

I have 5 classes in both stage1 and stage2, for a total of 10 output channels. When I examine the Linux .h5 output file and print each channel's index, max, and min, I find:

channel 0: max 1.0, min 0.0
channel 1: max 1.0, min 0.0
channel 2: max 1.0, min 0.0
channel 3: max 1.0, min 0.0
channel 4: max 1.0, min 0.0
channel 5: max 0.0010306347, min 9.972058e-05
channel 6: max 0.073952764, min 0.0068365945
channel 7: max 0.17684516, min 0.016430637
channel 8: max 0.21699436, min 0.020667035
channel 9: max 0.9559661, min 0.53117704

Where 5-9 are the stage2 channels.
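For reference, the per-channel inspection described above can be sketched roughly as follows. This assumes the channel axis is last; a synthetic array stands in for the exported data (in practice you would load it first, e.g. `data = h5py.File(path)["exported_data"][...]`, where the dataset name depends on your export settings):

```python
import numpy as np

def channel_ranges(data):
    """Per-channel (min, max), assuming channels are on the last axis."""
    return [(float(data[..., c].min()), float(data[..., c].max()))
            for c in range(data.shape[-1])]

# Synthetic stand-in: 10 channels, stage1 = 0-4, stage2 = 5-9.
rng = np.random.default_rng(0)
data = rng.random((32, 32, 10)).astype(np.float32)

for c, (lo, hi) in enumerate(channel_ranges(data)):
    print(f"channel {c}: max {hi:.6g}, min {lo:.6g}")
```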

My command line is:

--headless --export_source="probabilities all stages" --readonly True --output_format="hdf5" --pipeline_result_drange="(0,1)" --export_drange="(0,1)" --output_filename_format={dataset_dir}/server_all_prob_hdf5/{nickname}.h5 --project=/<path_to_data>/kidney_he_x2p5_2stage_peb10841.ilp /<path_to_data>/*.tif

I made the model in Ilastik 1.3.3_post3 on Windows and am running it in 1.3.3_post3 on the server as well. I'll see if selecting only "probabilities stage2" as a workaround avoids the issue, but it would be nice to get both stage1 and stage2 probability output working. Thanks!

Hi @scook,

welcome to the community!

that does indeed look odd and needs further investigation. Would it be possible for you to share your project / data?

Cheers and thank you for the report!

Thanks @k-dominik!

I only notice this issue in two very large autocontext projects (~1GB .ilp files). I tried to replicate it in a minimal working example to share, but the issue does not appear there. Assuming you are willing to handle such a large file, how would you recommend I send it to you? If the project file is too big but the output files alone are of use, the .h5 output is about 100MB.


Hmm, before we dive into exchanging large files, it would already be helpful to see the logs of the batch run on the server. Would you mind sharing those instead? The log file should be rather small compared to your data. By default on Linux (with ilastik 1.3.3…) the log is saved in /home/<yourusername>/ilastik_log.txt. You could upload it to this filedrop (active until 20.12.2021).


Hi Dominik,

I’ve uploaded the ilastik_log.txt file from one run of the program on one file, where I confirmed it had the weird 2nd stage probability output issue. When you get a chance to look at it, please let me know how you’d like to proceed.

I may not be able to respond until the beginning of January. In that case, cheers and happy new year!


Hi @scook,

thank you very much for providing the file. In the logs I can see that at least one of the classifiers was not saved in a trained state. This triggers ilastik to retrain in headless mode, and here might be the caveat: it will not be able to access the training data on your Linux server (I assume) and will train without appropriate input data.

I still have to confirm that there isn't a bug somewhere that causes this in every case (not only in the case where the classifier was not saved).

I won't get to this until the end of the year, though…


Hi @k-dominik,

Thanks for looking into it. I had a few observations that I hope can be useful:

at least one of the classifiers has not been saved in a trained state. This will trigger ilastik to retrain in headless mode - and here might be the caveat.

I don't know if it's related to this, but while running the 2-stage workflow I noticed that ilastik would perform a lengthy retraining step after processing every ~2-3 images, which I found confusing. Is that expected behavior? It happened on both Windows and Linux systems, but only the Windows output was correct. Does it retrain the same way when running through the GUI on Windows as in headless mode on the Linux server?
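One quick sanity check for diagnosing output like the above: within each stage, the class probabilities at every pixel should sum to roughly 1. A minimal sketch, assuming the channel layout from earlier in the thread (channels 0-4 are stage1, 5-9 are stage2) and using a synthetic array in place of the loaded HDF5 data:

```python
import numpy as np

def stage_sums(data, stage_slices=((0, 5), (5, 10))):
    """Min/max of the per-pixel probability sum within each stage's channels."""
    out = []
    for a, b in stage_slices:
        s = data[..., a:b].sum(axis=-1)
        out.append((float(s.min()), float(s.max())))
    return out

# Synthetic example: stage1 channels normalized to sum to 1, stage2 left raw.
rng = np.random.default_rng(1)
data = rng.random((16, 16, 10))
data[..., :5] /= data[..., :5].sum(axis=-1, keepdims=True)

s1, s2 = stage_sums(data)
print("stage1 per-pixel sum range:", s1)  # close to (1.0, 1.0)
print("stage2 per-pixel sum range:", s2)  # far from 1 -> something is off
```

If the stage2 sums are far from 1 only in the headless output, that points at the export (or retraining) path rather than the trained model itself.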


Hi @scook,

retraining is absolutely not expected and we definitely need to look into that. With the information you’ve given it should be possible to reproduce this behavior. I’ll give an update once I’ve reproduced it successfully.


Hey @scook,

thank you again for making us aware and providing information to reproduce the issue. We are currently working on a fix and will have a new release soon that includes it!

Happy to help @k-dominik, thanks to you and the team for Ilastik!

Hi @scook,

we've released 1.4.0b12, which should resolve this issue!