Ilastik object classifier - PCA analysis on the features tables

Hi ilastik developers,

I am working with the object classifier in a workflow that first does a pixel classification and then uses that output as input to an object classification. These are two separate ilastik files.

ilastik object classification feature table

I am writing here in the hope of getting a little clarification on object classification, and specifically on a PCA analysis of the exported features. We have a trained classifier that actually works quite well, and we were interested in which features are actually important for the classification. We therefore turned to the feature export to do a bit of PCA analysis, to get an understanding of what actually drives the classification, and that is what my questions revolve around. I am still new to PCA, so forgive me if some of the questions are naive or don't make any sense.

The built-in PCA features

The feature export includes a series of PCA components (called Principal components of the object_x with x = [0, 3]); however, when plotting those I get a very strange relationship, i.e. PCA_0 and PCA_1 lie on a circle and PCA_1 and PCA_2 are (almost) equal. See the images below, where the colors represent the different classes.
[Images: ilastik PCA_0 vs PCA_1; ilastik PCA_1 vs PCA_2]

Trying my own PCA analysis using sklearn

So I tried running my own PCA analysis using the Python sklearn package, but it didn't bring any revelations. See for instance the initial output below. I think one of the things I am struggling with is the intensity histogram and the neighboring intensity histogram: together they take up 128 of my roughly 180 features, which I guess makes it easy for them to dominate the PCA analysis.
[Image: sklearn PCA of the classification with normalization]
This is the result of the sklearn PCA, including a normalization of all the features (excluding position and labels). The first component accounts for 24% of the variance, which is not a lot compared to textbooks and online tutorials, but maybe that is as high as it gets in a real-life application?!
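To illustrate why normalization matters so much here, below is a minimal sketch of the two variants with sklearn. The feature table is synthetic (the real one would be loaded from the ilastik export): one column is given a much larger scale, the way a size- or histogram-count feature can dwarf the rest, and it then dominates the unscaled PCA.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Synthetic stand-in for an ilastik feature table: 200 objects x 10 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, 0] *= 100  # one feature on a much larger scale, like a size feature

# Without scaling, the large-scale feature dominates the first component.
pca_raw = PCA(n_components=2).fit(X)
print(pca_raw.explained_variance_ratio_)  # first component near 1.0

# With standardization, each feature contributes on an equal footing,
# so the first component explains far less of the (now shared) variance.
X_std = StandardScaler().fit_transform(X)
pca_std = PCA(n_components=2).fit(X_std)
print(pca_std.explained_variance_ratio_)
```

This also suggests one explanation for the 24%: after standardization the variance is spread over many roughly independent features, so a modest first component is not unusual for real data.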

This brings me to my questions:

  1. I am not 100% sure about the algorithm behind the object classifier, but is it possible to have a good classifier and still see no pattern in a PCA?
  2. I experimented a bit with running the PCA both with and without normalization (using sklearn.preprocessing.StandardScaler). I got a better PCA without normalization, but the values had a somewhat strange range. Are the values in the feature export already normalized?
  3. Is it correctly understood that the intensity histogram is used as 64 individual features in the classification? Or are they combined into a smaller feature space before being combined with the others?
  4. Or do I simply have too high expectations of a clear cut in a 2D PCA analysis?

Any help would be much appreciated.


Hey @JesD12,

nice post :slight_smile:

Maybe it’s important to know what the PCA in ilastik entails.
It’s not documented in the docs, unfortunately; it's one of those things we'd really like to change but never find the time for.

So, the PCA in ilastik operates on the coordinates of the pixels/voxels of the objects, on an object-by-object basis. The principal components are the eigenvectors, i.e. the principal axes of those pixels. In 3D you have 3 of these vectors, with 3 values each (so Principal components of the object_0 … _8); in 2D there are two axes with two values each, so Principal components of the object_0 … _3.

Since for you it goes from 0..3, I assume you are analysing 2D data.

The eigenvectors are normalized, so they have a length of 1. If you plot them for all your objects, you can expect them to lie on the unit circle (which is what you see in your first image). Principal components of the object_0 is the x-value of the first eigenvector, and Principal components of the object_1 is its y-value.

The second characteristic of the eigenvectors is that they are perpendicular to each other, so the first vector is perpendicular to the second. For unit vectors in 2D, a perpendicular vector is the original vector rotated by either 90° or 270°: if your first vector has values (x, y), this results in either (-y, x) or (y, -x). For your second plot this means you are plotting Principal components of the object_1, i.e. y, against Principal components of the object_2, which is either -y (the short segment) or y (the long segment).
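The geometry described above is easy to verify with a small sketch: fit a PCA to the pixel coordinates of a single (here synthetic, elongated) object and check that the resulting axes are unit length and perpendicular.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical pixel coordinates of one elongated 2D object,
# stretched along x so the first principal axis is well defined.
rng = np.random.default_rng(1)
coords = rng.normal(size=(500, 2)) * [5.0, 1.0]

pca = PCA(n_components=2).fit(coords)
v0, v1 = pca.components_  # first and second principal axis

# Unit length: over many objects these values fall on the unit circle.
print(np.linalg.norm(v0), np.linalg.norm(v1))

# Perpendicular: v1 is v0 rotated by 90 or 270 degrees,
# i.e. (x, y) becomes (-y, x) or (y, -x), so their dot product is 0.
print(np.dot(v0, v1))
```

Running this for many objects and scatter-plotting the component values reproduces the circle and the ±1-slope lines from the original plots.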

I guess the main difference from your own PCA is that ours operates on the pixel coordinates, whereas your PCA operates on many other features.

Sooo, does this clear it up?


Hi @k-dominik

Thank you so very much, that explains a lot. I might have focused too much on the “Principal components” part of the name, and too little on the “of the object” part.

Again thank you so much for taking the time to explain it.




If you have object features (e.g. texture, shape, …) and want to know which ones are important for classification, PCA is probably not the best way to go: the components are linear combinations of features, so interpretation requires looking at the coefficients, and there may not be a clear answer there; in addition, a 2D plot may not adequately show the separation between classes. I found that a good approach is to look at the feature importance output of tree-based classifiers such as random forest or XGBoost. I’ve just recently implemented this in the Image Data Explorer using the R package xgboost. This can also be done in Python if you prefer to go down that road.
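Since the original poster works in Python, here is a minimal sketch of the feature-importance approach with a random forest (synthetic data; the feature names are made up, only the first column is constructed to be class-informative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 300
# Hypothetical object feature table with two classes; only "area"
# (column 0) actually separates them.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5))
X[:, 0] += 3 * y  # shift the informative feature per class

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances, one value per feature, summing to 1.
names = ["area", "perimeter", "mean_intensity", "eccentricity", "solidity"]
for name, imp in sorted(zip(names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

As noted later in the thread, these importances can shift when features are added or removed (correlated features share importance), so they are best read as an exploratory ranking, not a definitive one.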

How do you deal with the problem that adding or removing features can completely change the feature importances (at least in my experience)? I would not really recommend using it to say that feature A is the most important one… Or are you using some systematic approach there?

This is typically an exploratory step. We’re often looking for some, preferably biologically interpretable, features associated with the object classes. The idea is often not to do feature elimination to improve the classifier (although this can help reduce overfitting), but to identify features that help interpret the experiments or focus the next ones.

Thanks @jkh1 and @k-dominik for the extra information.
Maybe I can elaborate a little on what I was trying to do. It was more an exploratory investigation of the features than a quantitative analysis of which parameter is most important.
I was aware that the classifier is not based on a PCA but on decision trees, but I still thought it made sense to look at where the variation in the data is.
I had more or less two questions in mind for my work:

  1. My plan was simply to look at which features contribute most to the first and second components of the PCA, to get a feeling for where the variation in the data is.
  2. I wanted to see what happens to the variation if I take away all parameters involving size; not necessarily how it affects the classification, but just to see what happens to the variation of the data (very exploratory, and I am not completely sure what to expect, other than that the PCA will become worse since we now have fewer features).
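The first point above, reading off which features contribute most to a component, can be done directly from the PCA loadings (the rows of sklearn's `components_`). A minimal sketch with a synthetic feature table and made-up feature names, where two correlated "size-like" columns load together on the first component:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Hypothetical feature table: "size" and "radius" are strongly
# correlated, the other two features are independent noise.
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(200, 1)),
               base + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])
names = ["size", "radius", "mean_intensity", "eccentricity"]

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))

# Each row of components_ holds one coefficient per original feature;
# large absolute values mark the features driving that component.
for pc, loadings in enumerate(pca.components_):
    ranked = sorted(zip(names, loadings), key=lambda t: -abs(t[1]))
    print(f"PC{pc}:", [(n, round(c, 2)) for n, c in ranked])
```

Dropping the size-like columns and refitting would then show how the variance structure changes, which matches the second point above.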

We are somewhat looking to check that size is not an overly dominant factor in the classification, and I think we already have an idea that it is not the case.

Thank you for sharing the Image Data Explorer, I didn’t know about it; I will check it out, it looks very interesting.