Interpret ilastik pixel classifier

Hi, all,

I used the pixel classification for fluorescence signal detection and it works great.
Now, I want to dig a little deeper to understand how (or why) it worked, not from the technical aspect, but from feature selection aspect, i.e. which features are used for the random forest classifier & what do they mean.

This is the result from Feature Selection with Filter Method, auto, penalty 0.05

Gaussian Smoothing (σ=5.0) in 2D
Gaussian Gradient Magnitude (σ=10.0) in 2D
Difference of Gaussians (σ=10.0) in 2D
Structure Tensor Eigenvalues (σ=0.7) in 2D [0]
Structure Tensor Eigenvalues (σ=3.5) in 2D [1]
Hessian of Gaussian Eigenvalues (σ=1.0) in 2D [1]

I have some trouble interpreting this result.
This is what I would say:
the Gaussian Smoothing, Gaussian Gradient Magnitude, and DoG features contribute to the classifier on images blurred with relatively large Gaussian kernels (σ = 5.0–10.0),
while the remaining features contribute more on images blurred with small Gaussian kernels (σ = 0.7–3.5), to detect finer structures??

I’m not someone with very solid math background, so correct me if I get things wrong.

Many thanks!


Hi @BioinfoTongLI,

nice to see you really getting into it. Using feature selection is the way to go to reduce the number of features so that only the important ones are included. This is done with the help of your training set (the annotations/labels you added).

First of all, I guess you’ve already looked at those features in the Feature Selection applet. This might help you get a feeling for the respective features.

The features are already roughly grouped into “color/intensity”, “edge”, and “texture” categories.

So, for you it’s

Gaussian Smoothing (σ=5.0) in 2D -> color
Gaussian Gradient Magnitude (σ=10.0) in 2D -> edge
Difference of Gaussians (σ=10.0) in 2D -> edge
Structure Tensor Eigenvalues (σ=0.7) in 2D [0] -> texture
Structure Tensor Eigenvalues (σ=3.5) in 2D [1] -> texture
Hessian of Gaussian Eigenvalues (σ=1.0) in 2D [1] -> texture
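If it helps to see what these filters actually compute, here is a rough sketch reproducing them with plain NumPy/SciPy. ilastik computes its features with the vigra library, so exact values will differ (in particular, the inner/outer scale handling of the structure tensor and the DoG scale ratio below are my approximations of ilastik's conventions, not guaranteed to match); the random image and the σ values are just placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a fluorescence image

# "color/intensity": Gaussian Smoothing (sigma=5.0)
smoothed = gaussian_filter(img, sigma=5.0)

# "edge": Gaussian Gradient Magnitude (sigma=10.0)
grad_mag = gaussian_gradient_magnitude(img, sigma=10.0)

# "edge": Difference of Gaussians; the 0.66*sigma second scale is
# (I believe) ilastik's convention, but treat it as an assumption
dog = gaussian_filter(img, sigma=10.0) - gaussian_filter(img, sigma=0.66 * 10.0)

def eig2x2(a, b, c):
    """Per-pixel eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc  # largest first

# "texture": Structure Tensor Eigenvalues (sigma=0.7)
# gradients at one scale, then the gradient products smoothed again
# (vigra's structureTensor takes separate inner/outer scales)
gx = gaussian_filter(img, sigma=0.7, order=(0, 1))
gy = gaussian_filter(img, sigma=0.7, order=(1, 0))
axx = gaussian_filter(gx * gx, sigma=0.7)
axy = gaussian_filter(gx * gy, sigma=0.7)
ayy = gaussian_filter(gy * gy, sigma=0.7)
st0, st1 = eig2x2(axx, axy, ayy)  # index [0] = largest eigenvalue

# "texture": Hessian of Gaussian Eigenvalues (sigma=1.0)
hxx = gaussian_filter(img, sigma=1.0, order=(0, 2))
hxy = gaussian_filter(img, sigma=1.0, order=(1, 1))
hyy = gaussian_filter(img, sigma=1.0, order=(2, 0))
h0, h1 = eig2x2(hxx, hxy, hyy)  # index [1] = smallest eigenvalue
```

The grouping then becomes intuitive: the intensity feature is the smoothed image itself, the edge features respond where intensity changes, and the eigenvalue features describe the local shape of the intensity surface (blob vs. line vs. flat), which reads as texture.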

Then there is the σ of the Gaussian smoothing kernel. Numerical values are in units of pixels here.
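As a concrete illustration with made-up numbers (use your own microscope's calibration), this is how you could translate a physical structure size into a σ in pixels; matching σ to roughly the radius of the structures of interest is just a common heuristic, not an ilastik rule:

```python
# Hypothetical values: 0.25 um pixels, objects about 2.5 um across.
pixel_size_um = 0.25             # microns per pixel
structure_diameter_um = 2.5      # typical diameter of the objects of interest

# rule of thumb: pick sigma around the structure radius, in pixels
sigma_px = (structure_diameter_um / pixel_size_um) / 2
print(sigma_px)  # -> 5.0
```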

If you want to know more about the respective filters, you could check the vigra documentation, e.g. here



Thank you very much!
Your answer makes sense given the results I’ve got. :grin:

Just one more thing about the feature selection applet:
It takes only a subregion of my annotated image, right? But why not the entire training set?
Is that to reduce computation time? Can I manually choose which region to test on?

Suggest Features will always use all training labels (everything you have annotated), but it only shows you a preview that corresponds to the part of the data you were looking at when you clicked on “Suggest Features”.