Pixel Classification - object classification and lots of image stacks

Hi, I am fairly new to ilastik and I have a bit of a fundamental question. Sorry if this has been asked before, but I have not found anything in the forum so far.
Anyway, I have lots of image stacks (brain slices) and I am trying to differentiate blood within blood vessels from mini hemorrhages. I first use pixel classification to train on 100 or so images for potential “hits” and then use the prediction images in object classification to differentiate noise, blood vessels, and hemorrhages. After some trial and error this seems to work quite well. (One question on the side: is there an advantage or disadvantage to sorting objects into three categories rather than two? I am actually only after the hemorrhages.)
My actual question is different though: how do I best apply this result to all the other image stacks I have got?
I thought batch processing would be the choice, but there I am missing data; i.e. it is expecting not only x, y, z (which is what my original images are) but also c and a 5th dimension. Also, I guess it needs a segmentation map that fits the original data. But is it right that I should train ilastik on all the data I have? Surely there is a better way to go about this.
Thanks in advance for your help.

Hi @Alex2,

First of all, great that you found ilastik to be of use for you. I need some clarification before I can answer your question, so just to make sure we are on the same page, let me summarize your setup/question:

  • You are using the combined pixel + object classification workflow (or the two workflows separately, which would always be our recommendation), where you have added
  • 100 or so images that you use for training (which I would expect to be super slow; we usually advise not to add more than 10 or so images).
  • In object classification you have three classes, but you are actually only interested in one class (hemorrhages). It shouldn’t make a difference, given that you provide enough training samples. However, it might be easier to train with just two classes (because you don’t have to separate the background into multiple categories).

To your actual question:

Once you have trained a classifier, we recommend going either to the batch processing applet or, in your case perhaps more viable, to headless processing (note that the docs there will be updated in the coming days, since they are fairly out of date).
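As a minimal sketch of what a headless call can look like, driven from Python: the `--headless`, `--project`, and `--export_source` flags come from the ilastik headless docs, while the launcher path, project name, and input file names below are placeholders you would replace with your own.

```python
# Sketch of invoking ilastik headless from Python. The launcher path
# ("./run_ilastik.sh"), the project file, and the input file names are
# placeholders, not real paths on your machine.
import shutil
import subprocess

def build_headless_cmd(project, inputs, launcher="./run_ilastik.sh"):
    """Assemble a headless pixel-classification call.

    --headless and --project are documented ilastik flags;
    --export_source selects which result image gets written out.
    """
    cmd = [launcher, "--headless",
           f"--project={project}",
           "--export_source=Probabilities"]
    cmd.extend(inputs)
    return cmd

cmd = build_headless_cmd("MyProject.ilp", ["mouse01.h5", "mouse02.h5"])
print(" ".join(cmd))

# Only actually run it if the launcher exists on this machine:
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```

The nice part for your use case: once the project file contains the trained classifier, you can loop such a call over all 30 stacks without opening the GUI again.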

I have to admit that I don’t fully understand the “training on all images” question; you might have to elaborate on this a little more. It shouldn’t be necessary to train on all your datasets if their appearance is comparable.

As for the question about the 5 dimensions: did you get some kind of error message?


Hi Dominik,
thanks for the answer. Sorry, I should have been a bit more precise in my original question.
What I have is image stacks from about 30 mouse brains. The final goal is to compare the number and size of the hemorrhages in the different animals. For each mouse I have a z stack of about 350 brain slices; even in 8-bit, the images are quite large once converted to HDF5 (a whole dataset is about 2 GB).
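Just to double-check that size claim, the arithmetic works out (using the 2160 x 2560 slice dimensions from the error message further down):

```python
# Sanity check of the dataset size: one 8-bit stack of
# 2160 x 2560 pixels and about 350 slices, one byte per voxel.
nbytes = 2160 * 2560 * 350
print(nbytes / 1e9)  # roughly 1.94, i.e. about 2 GB per mouse
```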

I read in the forum, in a different thread, that you recommend using the separate workflows, so I initially do the pixel classification on 100 or so images and export the probabilities. Then I use my original 100 images and the probabilities as input to the object classification.

Well, you are right, it is very slow with that many images. I first used 10 for the pixel classification, but then I cannot use 3D features, as the sigmas of 3.5 and 5.0 for the Gaussian filters prevent that. The other reason to use the large dataset is that when I go for the Object Classification workflow and input the raw data and the pixel prediction map, I get an error saying that it cannot use the given properties for the dataset:
(Raw data and other data must have equal dimensions (different channels are okay).
Your datasets have shapes: (1, 2160, 2560, 100, 1) and (1, 2160, 2560, 21, 1))
Sorry, I cannot paste the screenshot at the moment.
Anyway, I solved this by inputting a prediction map that corresponds to the full dataset, meaning I have to use a lot more images to create the prediction map in the first place.
However, I very much doubt that this is the right workflow, as I would then have to do the same thing for all 30 image stacks, each with 350 or so images…

I will write another answer later on about batch mode, as I don’t seem to be able to reproduce the exact same error right now. It said something along the lines of my input data not having the dimensions it was expecting.

Sorry, this may be a complete beginner’s question, but at this point I am stuck and I don’t seem to find a good solution for the problem.


Hey Alex,

first of all, really cool that you looked around in the forum already and that you are not using the combined workflow :slight_smile:

Maybe let’s get some terminology straight, because I am not sure I understand some of the details you have mentioned.

So far I gather your data is in the form of 3D stacks that you have converted to hdf5 (which is really the right thing to do for use in ilastik). So for me, each of these hdf5 files constitutes one 3D image or volume.
Furthermore, you have worked on different subsets of those stacks, right? So you created volumes with different depths in the z-direction, with the aim of speeding up processing.

Quick side note, and I am not saying you should exploit this feature for your work, but since ilastik 1.3.2 you can selectively switch the filters to be computed in 2D for certain sigmas (see our docs).

However, I still think it should be possible to work on the whole images (the 2 GB ones), at least for pixel classification. One thing you can do to speed it up, and I have to admit this setting is more than hidden: if you are working with 3D data, you should change the tile size. In Pixel Classification you can access this value in the Training and Feature Selection applets via the view menu. So select View - Set Tile Width… and set it to 256. It’s a global setting, so you only have to do it once. If you ever feel like processing large 2D images, switching it to 512 would make sense.

Furthermore, with the size of your images, you should not zoom out too much while in live-update mode.

Okay, switching to object classification now.
Raw data and prediction map have to have the same size in all dimensions (different channels are okay).
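That rule can be sketched as a small check, assuming ilastik’s 5D txyzc axis order (time, x, y, z, channel). The function name here is illustrative, not an ilastik API:

```python
# Sketch of the compatibility rule: all axes must match except
# (possibly) the channel axis. shapes_compatible is a hypothetical
# helper, not part of ilastik.
def shapes_compatible(raw_shape, pred_shape, channel_axis=4):
    """True if every axis matches except possibly the channel axis."""
    if len(raw_shape) != len(pred_shape):
        return False
    return all(r == p
               for i, (r, p) in enumerate(zip(raw_shape, pred_shape))
               if i != channel_axis)

# The shapes from the error message earlier in the thread (z differs):
print(shapes_compatible((1, 2160, 2560, 100, 1),
                        (1, 2160, 2560, 21, 1)))   # False: 100 vs 21 in z
# A channel-only difference would be fine:
print(shapes_compatible((1, 2160, 2560, 100, 1),
                        (1, 2160, 2560, 100, 3)))  # True
```

So the error you pasted is exactly this: your prediction map covered 21 slices while the raw data had 100.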

While Pixel Classification can be applied to arbitrarily large volumes, Object Classification is currently limited by the RAM of your machine. So you’ll just have to try whether it will work with one of your complete volumes (2160x2560x350). If not, you are not on lost ground: you would first want to train the classifier interactively on an image of a size your machine can handle. For this you’d have to cut out a part from both your raw data and the prediction map. Once you have a trained classifier, we have a blockwise Object Classification that internally works on overlapping blocks of the whole image. Right now the documentation for Object Classification is a bit outdated; we’re in the process of updating it. So if you have to go the blockwise way, you can always ask for more details here.
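The cutting-out step is just taking the same window from both files. A sketch with numpy arrays standing in for the HDF5 datasets (the sizes below are made up; h5py datasets support the same slicing, so you can crop without loading the full volume):

```python
# Cut matching subvolumes from raw data and prediction map so the
# training image fits in RAM. numpy arrays stand in for the HDF5
# datasets; the shapes here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
raw  = rng.integers(0, 255, size=(350, 512, 512), dtype=np.uint8)  # z, y, x
pred = rng.random(size=(350, 512, 512)).astype(np.float32)

# Take the exact same z/y/x window from both volumes:
z, y, x = slice(0, 64), slice(0, 256), slice(0, 256)
raw_crop, pred_crop = raw[z, y, x], pred[z, y, x]

# Equal spatial dimensions is what Object Classification requires:
assert raw_crop.shape == pred_crop.shape == (64, 256, 256)
```

You would then save both crops (e.g. to new hdf5 files) and load those into the Object Classification project for interactive training.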

Hope that helps

Thanks a lot, I will try that.
Best regards,