How do I select the training features for an image, and on what priority basis?
Welcome to the Forum!
So… can you elaborate a bit? We need some more context/details… Are you referring to the training features of the machine learning tool Trainable Weka Segmentation? Or to tools/links for learning more about ImageJ/Fiji? If the latter, here are some helpful links for getting started:
- ImageJ wiki - the best place to learn everything about ImageJ/Fiji!!
- “Introduction to Fiji” workshop and corresponding slides - worth the time to get a solid intro
- Principles page - collection of principles for the entire image analysis process, from acquisition to processing to analysis
- Segmentation page
- “Segmentation in Fiji” workshop and corresponding slides
- Trainable Weka Segmentation (TWS) plugin - a great tool for segmentation that comes directly with Fiji. NOTE: Fiji is Just ImageJ - it is simply a distribution of ImageJ that comes with a bunch of plugins bundled - ready for you to use out-of-the-box. If you are just getting started, we recommend downloading/using Fiji.
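If you ever end up driving TWS from a script (e.g. from Fiji's Script Editor) rather than from the GUI, the workflow looks roughly like the sketch below. This is only a minimal sketch based on the trainableSegmentation API that ships with Fiji; the image paths, ROI coordinates and class indices are placeholders you would replace with your own.

```java
import ij.IJ;
import ij.ImagePlus;
import ij.gui.Roi;
import trainableSegmentation.WekaSegmentation;

public class TwsQuickStart {
    public static void main(String[] args) {
        // Placeholder paths -- replace with your own images
        ImagePlus trainImage = IJ.openImage("/path/to/training-image.png");
        ImagePlus testImage  = IJ.openImage("/path/to/test-image.png");

        // Create the segmentator on the training image
        WekaSegmentation segmentator = new WekaSegmentation(trainImage);

        // Add one example ROI per class (e.g. class 0 = object, class 1 = background);
        // the rectangle coordinates are arbitrary placeholders
        segmentator.addExample(0, new Roi(10, 10, 50, 50), 1);
        segmentator.addExample(1, new Roi(300, 300, 50, 50), 1);

        // Train with the currently enabled features and apply the classifier to another image
        segmentator.trainClassifier();
        ImagePlus result = segmentator.applyClassifier(testImage);
        result.show();
    }
}
```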
Hope this helps a bit! Just post again with more information if you can…
eta
Yes, I am talking about Trainable Weka Segmentation’s training features.
I am using TWS on hand/face blob images; kindly tell me which training features I should use for my work.
Thanks.
Would you be able to post an original image? That way we can take a look at your datasets… but for sure you should read up on the different features on the TWS page to start…
eta
This is the type of image I am using for segmentation.
Please suggest which training features I should use.
The best place to start is just testing out different combinations of features based on those available. For these types of images - I am not sure myself. What classes of objects do you need to determine? Face versus background? etc. @iarganda would obviously have better insight on this…
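For what it’s worth, “testing out different combinations” doesn’t have to mean endless clicking in the settings dialog - it can be scripted. The sketch below is only an illustration, assuming the trainableSegmentation API and that the feature names/order follow FeatureStack.availableFeatures; the feature-name strings, ROIs and output paths are placeholders. It trains the same labels with two candidate feature subsets and saves each result so you can compare them side by side.

```java
import ij.IJ;
import ij.ImagePlus;
import ij.gui.Roi;
import trainableSegmentation.FeatureStack;
import trainableSegmentation.WekaSegmentation;

public class FeatureSubsetTest {
    // Build a boolean mask over FeatureStack.availableFeatures for the given feature names
    static boolean[] mask(String... wanted) {
        String[] all = FeatureStack.availableFeatures;
        boolean[] enabled = new boolean[all.length];
        for (int i = 0; i < all.length; i++)
            for (String w : wanted)
                if (all[i].equalsIgnoreCase(w)) enabled[i] = true;
        return enabled;
    }

    public static void main(String[] args) {
        ImagePlus image = IJ.openImage("/path/to/training-image.png");

        // Two candidate subsets -- the names are assumptions based on the
        // feature list shown in the TWS settings dialog
        boolean[][] candidates = {
            mask("Gaussian_blur", "Sobel_filter", "Hessian"),
            mask("Gaussian_blur", "Difference_of_gaussians", "Membrane_projections")
        };

        for (int c = 0; c < candidates.length; c++) {
            WekaSegmentation seg = new WekaSegmentation(image);
            seg.setEnabledFeatures(candidates[c]);
            // Same placeholder labels for every run so the results are comparable
            seg.addExample(0, new Roi(10, 10, 50, 50), 1);
            seg.addExample(1, new Roi(300, 300, 50, 50), 1);
            seg.trainClassifier();
            ImagePlus result = seg.applyClassifier(image);
            IJ.save(result, "/path/to/result-subset-" + c + ".tif");
        }
    }
}
```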
It would be amazing to have TWS turn on every feature at the outset, then determine which features contribute most to effective separation. One could then scale back to just those necessary features to save on processing speed/efficiency.
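In the meantime, something close to this can already be done by hand: enable all the features, train once, export the labelled samples with the “Save data” button (which writes an ARFF file), and let Weka rank the attributes - then re-train with only the top-ranked features enabled. Here is a rough sketch of the ranking step, assuming the class label is the last attribute in the exported ARFF (the path is a placeholder):

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RankTwsFeatures {
    public static void main(String[] args) throws Exception {
        // ARFF exported from TWS via its "Save data" button (placeholder path)
        Instances data = new DataSource("/path/to/tws-training-data.arff").getDataSet();
        // Assume the class label is the last attribute
        data.setClassIndex(data.numAttributes() - 1);

        // Rank every feature attribute by information gain w.r.t. the class
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new InfoGainAttributeEval());
        selector.setSearch(new Ranker());
        selector.SelectAttributes(data);

        // Print attributes from most to least informative: [attribute index, merit]
        double[][] ranked = selector.rankedAttributes();
        for (double[] row : ranked) {
            int attrIndex = (int) row[0];
            System.out.printf("%-40s %.4f%n", data.attribute(attrIndex).name(), row[1]);
        }
    }
}
```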
eta
While I like the idea, @etadobson, people who are used to the minimum set of features might run into trouble. When training on a large image stack, I’d much rather start with a minimum set of features and get a result than to start with all features selected and run out of memory. Does that make sense?