Pixel classification of multi-channel images

I have a question concerning the different pixel-classification tools, such as ilastik, Trainable Weka Segmentation, and CATS, when using multi-channel images.

Are the features computed per channel independently? Is the label used only for the channel where it has been created?

Sometimes I would like to combine several channels, sometimes not. My impression is that Weka uses only one channel, while ilastik uses all channels by default.

It would be helpful if some of the developers (@Christian_Tischer, @ilastik_team, …) could give us some insight.




Hi Toni, in CATS the features for each channel are computed separately and one can specify which channels to take into account.

@iarganda should know :slight_smile:

In ilastik, all channels are used and features are computed per channel. The labels refer to the spatial position, so for every labeled pixel, all selected features will be computed for all channels.
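For intuition, here is a minimal numpy/scipy sketch of that scheme (not ilastik's actual code; the function name and the Gaussian filter bank are just illustrative assumptions): each feature is computed independently for every channel, the per-channel feature values are concatenated per pixel, and a label at a spatial position picks up the full concatenated vector there.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def per_channel_features(img, sigmas=(1.0, 3.5)):
    """img: (H, W, C). Returns an (H*W, C*len(sigmas)) feature matrix:
    each feature (here, Gaussian smoothing at several sigmas) is computed
    independently per channel, then concatenated per pixel."""
    feats = [gaussian_filter(img[..., c], s)
             for c in range(img.shape[-1])
             for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

img = np.random.rand(64, 64, 3)          # toy 3-channel image
X = per_channel_features(img)            # (4096, 6) feature matrix

# A label refers only to a spatial position, so a labeled pixel
# gets the features of *all* channels at that position.
labels = np.zeros((64, 64), dtype=int)   # sparse user annotations
labels[10, 10], labels[50, 50] = 1, 2    # two labeled pixels
mask = labels.reshape(-1) > 0
X_train, y_train = X[mask], labels.reshape(-1)[mask]
```

`X_train` is what the random forest would actually see: one row per labeled pixel, columns spanning every channel.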

Thanks for the response. But is there a way to optionally disable this behavior in ilastik? I guess the easy way is to load the channels separately.

Yes, I’m afraid you’ll have to throw the channels you don’t want out of the dataset. ilastik allows stacking across channels, so if you have each channel in an individual image, it should be easy to make a composite with just the ones you want.

@apoliti what is your reason for disregarding some channels? The random forest should be able to disregard meaningless features on its own.


Dear @hanslovsky,
As an example, you may have two fluorescent markers that are individually not so good, one inside the nucleus and one outside. The combination of both could give a good classifier for the nucleus.

The other example, where one would like to deal with the channels independently, is when the two markers are not related and their overlap changes over time. If you train on the combined channels you create more confusion and need more training examples (different time points, etc.).


Hi all,

in the commercial tool Intellesis (a software module in ZEN Blue and ZEN Core, 30-day free trial) you have the option for both:

  • take a multi-channel (or color image) or an image created from a correlation of different imaging modalities
  • train model on a single channel = features will be taken only from this channel and used to classify a pixel XY
  • train model on all channels = features will be calculated for all channels and the feature vector will be concatenated before classification of the pixel XY
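The difference between the two training modes can be sketched roughly like this (a hypothetical helper, not Intellesis code; raw intensities stand in for a real filter bank): single-channel training builds the per-pixel feature vector from one channel, while all-channel training concatenates the per-channel features before the pixel is classified.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, channels):
    """Hypothetical helper: per-pixel feature vectors from a chosen
    subset of channels (raw intensities stand in for real features)."""
    sub = img[..., list(channels)]
    return sub.reshape(-1, sub.shape[-1])

rng = np.random.default_rng(0)
img = rng.random((32, 32, 4))                  # toy 4-channel image
y = (img[..., 0] > 0.5).astype(int).ravel()    # toy per-pixel labels

# "train on a single channel": features from channel 0 only
X_single = pixel_features(img, [0])            # shape (1024, 1)
# "train on all channels": concatenated per-channel feature vector
X_all = pixel_features(img, range(4))          # shape (1024, 4)

clf = RandomForestClassifier(n_estimators=10, random_state=0)
pred = clf.fit(X_all, y).predict(X_all)        # one class per pixel XY
```

With spectrally informative data, `X_all` gives the classifier access to the full per-pixel spectrum instead of a single band.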

This is especially beneficial for color images, correlative microscopy images, and applications where the spectral information of a pixel XY is relevant to classify it correctly.

Sorry, the Trainable Weka Segmentation plugin/library does not support multi-channel images yet. It is on my TODO list!


Hi! I’m looking forward to when it can be used for multi-spectral data, e.g. 8-band satellite imagery.



Hi, this is exactly the reason why in Intellesis one mode of operation is called “multispectral”.

Check it out yourself, or post an image and I can give it a try.

You can do this with Bio7 and R. I’ve created several functions to transfer multiple image selections (ROIs in the ROI Manager) from all opened images or a stack to R (like an image-selection cutter through all opened images or the whole stack). For each transfer you can select a signature. In R you can concatenate them for your classification purposes.

Here is an older video:

However, it is now possible to transfer selected ROIs of the ROI Manager with their signature (class) instead of all ROIs in the ROI Manager (in ImageJ, no selection means all!).

Some simple R classification scripts can be found here:

By the way:
Just recently I released an update for Windows and Mac with two new scripts: one to combine matrix rows in R and one to rename multiple selected ROIs in the ROI Manager.

An ImageJ feature stack exported from the Weka plugin can be imported into the ImageJ plugin of Bio7 to transfer selected features (layers).

There are some R packages which can compute additional features to combine, such as GLCM (gray-level co-occurrence) matrices:

And of course you can use, e.g., PCA to reduce the dimensionality of your feature classification stack.
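A PCA reduction of a (pixels × features) stack, as mentioned above, can be sketched in a few lines of plain numpy (an illustrative sketch, not tied to any of the R packages discussed here):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a (pixels, features) stack onto its n_components
    strongest principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

X = np.random.rand(1000, 20)   # e.g. a 20-layer feature stack, flattened
X_red = pca_reduce(X, 3)       # keep only the 3 strongest components
```

Classifying on `X_red` instead of `X` trades some information for a much smaller feature stack.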

@Sam_Davidson In the past I did a classification of Landsat images (with a PCA-reduced feature stack). There is a Landsat package available, too:

But you can simply import the TIFFs into ImageJ (georeferencing is lost) and then classify the image with the selected feature stack. In R you can georeference the classification result for export, etc.


Thanks to all of you for the useful answers and the new tools that were discussed.

Slightly off-topic, but I wrote some documentation on how to combine training data from multiple data sets within CATS: https://github.com/embl-cba/fiji-plugin-cats#training-a-classifier-on-multiple-data-sets. I thought I’d quickly bring it to your attention.


Late reply:
yes, I used it and it works well. In the end it was quite intuitive to toggle channels on and off and to work with multiple data sets.


Dear colleagues,

Can you share an update on the topic, please?
I also have a set of n RGB/HSV spatially registered raster images of the same object, acquired with different instruments. Now I want to train a classifier on all n images.
Is CATS the only option for this?

Thank you.

QuPath also supports pixel classification:

Individual channels of a multi-channel image can easily be toggled on/off for training or visualization.


If you are willing to give the ZEN platform a try, then you can do:

  • import your images in ZEN Connect (using the 3rd-party package = BioFormats)
  • register and align images from different modalities in ZEN Connect (if needed)
  • define region and export as new multi-channel image containing all modalities as a “stack of channels”
  • train a pixel classifier in ZEN Intellesis which will extract features from all channels (= all modalities)



And here is another Pixel Classification GUI which uses ImageJ for feature creation and (at the moment) R scripts for classification:

GitHub source (with various R examples):