Weka Segmentation Training Features Description

fiji
imagej
plugin
segmentation

#1

I have been testing out the Weka plugin for FIJI on some data, and I realized that I don’t have a solid grasp on what each of the training features it offers is really useful for. The documentation is very thorough in explaining what each training feature does to segment… but to those of us who do not live and breathe image processing it’s a lot to attempt to digest.

Does anyone have any resources or explanations on what the advantages of each training feature are? I think a more watered-down explanation of each training feature would make Weka more approachable for newer users.

  1. Gaussian Blur
  2. Hessian
  3. Membrane projections
  4. Mean
  5. Maximum
  6. Anisotropic Diffusion
  7. Lipschitz
  8. Gabor
  9. Laplacian
  10. Entropy
  11. Sobel Filter
  12. Difference of Gaussians
  13. Variance
  14. Minimum
  15. Median
  16. Bilateral
  17. Kuwahara
  18. Derivatives
  19. Structure
  20. Neighbors

#2

Good day!

[…] but to those of us who do not live and breathe image processing it’s a lot to attempt to digest.

Mmmh, to understand classifiers (and WEKA per se is nothing else), you need not know anything about image processing.

Image classification is simply a special case of signal classification where the signals are 2D or perhaps 3D.

Consequently, you have to learn something about classifiers and how they work, and this is a heavy job! WEKA-classification of images may be better understood with this background. To understand what features are used, the cited documentation appears largely to the point:

It is important to note that

WEKA […] produce pixel-based segmentations

and that

WEKA […] creates a stack of images—one image for each feature. For instance, if only Gaussian blur is selected as a feature, the classifier will be trained on the original image and some blurred versions

I would start by trying to understand the meaning of both sentences.
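
To make the second sentence a bit more concrete, here is a minimal sketch in Python (NumPy and scikit-image, purely for illustration; TWS itself does this internally in Java): it builds a tiny “feature stack” of Gaussian-blurred versions of an image and shows that each pixel ends up with one feature vector, and it is these per-pixel vectors, not whole objects, that the classifier is trained on.

```python
# Purely illustrative sketch (NumPy + scikit-image), not the plugin's Java code.
import numpy as np
from skimage import data, filters

image = data.camera().astype(float)   # any 2D grayscale image

# One feature image per setting: the original plus Gaussian blurs at several sigmas,
# analogous to "the original image and some blurred versions" in the documentation.
sigmas = [1, 2, 4, 8]
feature_stack = np.stack([image] + [filters.gaussian(image, sigma=s) for s in sigmas],
                         axis=-1)     # shape: (rows, cols, n_features)

# Pixel-based segmentation: every pixel is one training sample, and its feature
# vector is just the values of that pixel across all slices of the stack.
y, x = 100, 200
print("feature vector of pixel (%d, %d):" % (y, x), feature_stack[y, x])
```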

Good luck

Herbie


#3

I agree that the learning curve is steep for non-specialists.

If you haven’t seen it already, check out section 3 of the TWS User Manual, which is found in the supplementary data of the TWS paper (doi:10.1093/bioinformatics/btx180).

It helpfully sorts the training features into categories: “edge detectors, texture descriptors, noise reducers or membrane detectors” and gives visual examples of each filter. It also gives a few general guidelines on how to make your initial feature selections.

Hope this helps.


#4

This is exactly what I was looking for. Thank you so much!


#5

As a quick answer for some:

- 1. Gaussian blur: removes high frequencies, i.e. “noise” (what you consider noise may be someone else’s signal).
- 2. Hessian: directionality of objects. It’s the second derivative. The Tubeness filter plugin is based on the Hessian too, and for each pixel it computes how likely it is to be in a tubular structure.
- 8. Gabor: for barriers, or membranes, or lines. Look it up on Wikipedia: https://en.wikipedia.org/wiki/Gabor_filter
- 11. Sobel filter: for edge detection. In some ways similar to Gabor, also useful for cross-sectioned membranes or lines of any kind. See https://en.wikipedia.org/wiki/Sobel_operator
- 12. Difference of Gaussians: I use it all the time for detecting cells or round/spherical objects. See https://www.ini.uzh.ch/~acardona/fiji-tutorial/#find-cells-with-DoG

In summary, what each filter brings in is a particular way of throwing away data, projecting your original data onto a particular dimension that happens to matter for separating objects (see the sketch below). The machine learning classifier then learns that pixels with certain values for the chosen filters belong to one class, pixels with other values to another class, and so on.
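
If seeing that in code helps, here is a rough sketch of a few of the filters above in Python with scikit-image (my own illustration; the numbers match the list, but these are analogues, not the exact TWS/ImageJ implementations):

```python
# Rough illustration of a few of the filters discussed above; scikit-image's
# versions are analogous to, but not identical with, the TWS implementations.
import numpy as np
from skimage import data, filters, feature

image = data.coins().astype(float)

# 1. Gaussian blur: suppresses high frequencies ("noise")
blurred = filters.gaussian(image, sigma=2)

# 11. Sobel: gradient magnitude, i.e. edges
edges = filters.sobel(image)

# 12. Difference of Gaussians: emphasizes roundish objects of a given scale
dog = filters.gaussian(image, sigma=2) - filters.gaussian(image, sigma=4)

# 2. Hessian: second derivatives; the eigenvalues describe local curvature and
#    directionality (tube-like vs. blob-like structures)
Hrr, Hrc, Hcc = feature.hessian_matrix(image, sigma=2, order='rc')
ev_max, ev_min = feature.hessian_matrix_eigvals([Hrr, Hrc, Hcc])

# Each of these arrays would become one slice of the per-pixel feature stack.
for name, f in [("gaussian", blurred), ("sobel", edges), ("DoG", dog), ("hessian", ev_max)]:
    print(name, f.shape, "range: %.2f .. %.2f" % (f.min(), f.max()))
```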

Hope this helps.


#6

Good day Albert,

yes, this was “a quick answer” and as such it was helpful, up to a certain point …

However, and I suppose you know about the important details behind the scenes (and therefore I’m actually not writing this post to/for you), things aren’t as easy when it comes to the classification of such features. Many of the implemented features aren’t fully linearly independent and as such have only a limited impact on the classification result. In this respect the non-linear operators are clearly superior.
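
As a small illustrative aside (a Python sketch of my own, not part of TWS): a Gaussian blur and a mean filter of comparable size produce per-pixel values that are almost perfectly correlated, which is one simple way to see that such features are far from linearly independent.

```python
# Illustration only: two "different" smoothing features can be nearly redundant.
import numpy as np
from scipy import ndimage
from skimage import data, filters

image = data.camera().astype(float)

gauss = filters.gaussian(image, sigma=2)        # Gaussian blur feature
mean = ndimage.uniform_filter(image, size=5)    # mean filter of comparable extent

# Pixel-wise Pearson correlation between the two feature images
r = np.corrcoef(gauss.ravel(), mean.ravel())[0, 1]
print("correlation between Gaussian and mean features: %.3f" % r)  # typically > 0.99
```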

What I wanted to point out in my previous post is that you cannot see things in isolation, here: feature generation and classification. While image filtering in a processing context is a rather straightforward affair, it is not if it is meant for classification. Furthermore, I wanted to make clear that the way TWS is implemented, it does pixel classification only. This means in the first place that it doesn’t use refined shape features. For a novice, both facts are of utmost importance and need to be understood, and especially the first isn’t possible without insights into classification per se.

As I’ve written several times already on this Forum:
Automatic classification is a mighty tool but difficult to use in an optimum fashion. Quite often it is not used correctly and it may either not deliver the expected results (a minor problem) or it may deliver results that are misleading (dangerous).

As with all more complicated tools, automatic classification should only be used by those who know exactly how to handle the tools and who know what they are doing. WEKA is an attempt to make automatic classification seemingly easy and it actually does so for those of us who know about the details …
Thanks to the WEKA-team!

Best

Herbie


#7

Very helpful. Thanks so much!


#8

Herbie, I absolutely agree that understanding how tools work under the hood is critical to understanding how to use them effectively. Ultimately, that’s my goal with WEKA and many other handy FIJI plugins. However, especially with the way a lot of users enter the bioimaging community these days from such diverse backgrounds, it’s really important to have information about these tools, and how to use them effectively, that’s accessible to a range of disciplines.

My question was asking for a mid- to high-level explanation of the training features available in WEKA, to offer some preface before diving in. We are fortunate in this scenario that WEKA is very well documented and fairly accessible (to my background), but that’s not always going to be the case for people without extensive exposure to high-level mathematics. And limiting the tool to only those who truly know how to use it and understand its inner workings kind of defeats the purpose of open source, right? WEKA makes this stuff easy, accessible, and surprisingly repeatable (imo). That’s super powerful for the imaging community at large.

That said, every processing and analysis tool used by anyone should always be used with a healthy dose of skepticism and supervision. And in many applications, ‘good enough’ can save a lot more time and pain than ‘not at all’. My hope is that anyone developing their own processing pipelines is well aware of that and have


#9

And limiting the tool to only those who truly know how to use it and understand its inner workings kind of defeats the purpose of open source, right?

I can’t follow you here.
What does the understanding of a processing scheme and its use have to do with open source, except that the open source code may help to understand the implementation (which isn’t necessary to understand the principles, though)?

I’m not limiting anything, but I think we should stick with good scientific practice, which implies that you know how to properly use tools.

Have success

Herbie

PS:
Signal/image processing and mathematical statistics are topics of master’s studies and as such can hardly be understood by reading a Wiki or the like, and can hardly be correctly applied by simply using even the best available computer programs.

What would you tell a graduate student in computer science if he claims that advanced knowledge in molecular biology must be accessible to him in the same (easy) way as you claim that machine learning should be for you?

Another aspect:
You perform a computerized t-test on some data and you don’t realize that the data is distributed multimodally. You have to know that this is a no-no and that in this case you must use other statistical means to get an idea of statistical significance. Now you could claim that the computer program should tell you that a t-test is not allowed in such cases. In fact that’s possible and done. But now imagine how many cases there are with data processing schemes that are much more involved than a simple t-test. The number of cases to test increases incredibly and that’s why coders will not be able to cover them all…
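
A small toy example (my own, in Python with SciPy, purely illustrative) of that t-test case: the program happily computes a result on clearly bimodal data, and only an explicit extra check reveals that the test’s assumptions are violated.

```python
# Toy illustration: a t-test runs without complaint on clearly bimodal data,
# even though its normality assumption is violated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Bimodal sample: mixture of two well separated normal distributions
a = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
# Unimodal comparison sample with a similar overall mean
b = rng.normal(5, 1, 1000)

t, p = stats.ttest_ind(a, b)
print("t = %.2f, p = %.3f" % (t, p))      # the program reports a value either way

# Only an explicit check flags the violated assumption (normality of the samples):
print("Shapiro-Wilk p-value for sample a:", stats.shapiro(a).pvalue)  # far below 0.05
```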

I hope this makes things a bit more transparent.


#10

Hey!

And limiting the tool to only those who truly know how to use it and understand its inner workings kind of defeats the purpose of open source, right?

I think his point here is that an open-source scheme has the advantage of making a program accessible to a wide range of users because you don’t have to pay for it. Therefore it would seem logical to also have documentation and knowledge bases available for everyone (especially laypeople) so that these people are able to actually use the software correctly.

The next part might be a bit off-topic as I am a newcomer and just want to state my point:

I totally agree with you here that image processing is a very advanced topic that could fill a whole study program on its own. Nonetheless I think it is important to make the tools for image processing available and understandable to everyone who needs them.

I think that because I’m in exactly that situation right now. I come from a geoscience background and started a PhD that relies heavily on processing of SEM images of minerals. We learned none of this in our studies and it kind of feels like standing in front of an impenetrable wall now. There is so much to learn and so much to take into account when performing these analyses. At this point I am certain that none of my output will be scientifically usable.

I am eager to learn everything I can to correctly segment and measure my biominerals, but I always hope to find more guidance online so that I won’t need my whole PhD to figure out how to do it :)


#11

and knowledge bases available for everyone (especially laypeople) so that these people are able to actually use the software correctly.

Nonetheless I think it is important to make the tools for image processing […] understandable to everyone who needs them.

Concerning classification, and this is the thread’s topic, I wish you the very best for your studies!

I fully agree that providing high quality free software is a great gift, but providing documents that are comprehensible for the layman is nearly impossible in certain cases, classification included; otherwise lengthy textbooks and university professors would be superfluous.

Finally don’t forget about the nice idea of division of labour, i.e. you may just hire a specialist who does the heavy lifting in an unfamiliar area for you.

Regards

Herbie


#12

Hey Herbie,

thanks for your reply!

Could you recommend some textbooks to quickly get someone with only minor knowledge about processing and classification up to date? That would be a great help.


#13

Please note that signal/image processing and classification theory are quite different affairs, and that recommending books is difficult because people usually don’t prefer the same style of presentation and explanation.

Image Processing
Mainly because it is directly related to ImageJ and because it is easy to read, I’d like to point you to:

Burger W. and Burge M.J. (2016, 2nd ed.)
Digital image processing: An algorithmic introduction using Java.
Springer, Berlin, New York, (811 pages).
ISBN: 978-1-4471-6683-2 (Hardcover), 978-1-4471-6684-9 (e-Book)

… and please have a look at it here:
https://imagingbook.com/books/english-edition-hardcover/

Classifiers and classification
This topic is much more difficult to treat, especially because the field has frayed considerably during the past 15 years or so. Not that there have really been many new developments of theoretical approaches, but the terminology has changed and there has been a great deal of development concerning practical aspects, i.e. implementations adapted to modern computer technology.

That having been said, I’d like to recommend a book about the theoretical basics that includes a unification of statistical and network approaches. The author was a leading person in pattern recognition and his group built the then most successful classifiers for character recognition.

Schürmann J. (1996)
Pattern classification. A unified view of statistical and neural approaches.
Wiley, New York/NY (392 pages).
ISBN 978-0-471-13534-0 (Hardcover)

I don’t know about a good book dealing with more “applied” aspects, but, for obvious reasons, I’d try searching for “WEKA”.

HTH

Herbie