WEKA Segmentation Using Three Input Layers

Hi All,

We have three input image layers for the same geographic region and would like to classify the area using the information from all three inputs. We created a TIFF file with three bands, each of them 8-bit. This was treated as an RGB image by 2D WEKA segmentation, and we noticed that if we select only the “Mean” feature, the final feature list for each training pixel includes: original, H, S, B, Mean_1, Mean_2, …, where original = (R+G+B)/3 and Mean_1, Mean_2, … are all based on (R+G+B)/3. What we were hoping to have instead, at least for the “Mean” feature, is Mean_R_1, Mean_G_1, Mean_B_1, Mean_R_2, Mean_G_2, … Is there a way to achieve this?
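To make the difference concrete, here is a minimal plain-Java sketch (with hypothetical pixel values) of the two kinds of features: the mean of the composite brightness (R+G+B)/3, which is what the plugin currently computes, versus one mean per band, which is what is being asked for:

```java
public class RgbMeanDemo {
    // Brightness of one RGB pixel, as used when the image is
    // collapsed to gray: (R+G+B)/3
    static double brightness(int r, int g, int b) {
        return (r + g + b) / 3.0;
    }

    // Mean of the composite brightness over a neighborhood
    // (a single feature, as currently produced)
    static double meanOfBrightness(int[][] rgb) {
        double s = 0;
        for (int[] p : rgb) s += brightness(p[0], p[1], p[2]);
        return s / rgb.length;
    }

    // One mean per band (three features: Mean_R, Mean_G, Mean_B)
    static double[] perBandMean(int[][] rgb) {
        double[] m = new double[3];
        for (int[] p : rgb)
            for (int c = 0; c < 3; c++) m[c] += p[c];
        for (int c = 0; c < 3; c++) m[c] /= rgb.length;
        return m;
    }

    public static void main(String[] args) {
        // A tiny neighborhood of three RGB pixels (hypothetical values)
        int[][] rgb = { {30, 60, 90}, {60, 90, 120}, {90, 120, 150} };
        System.out.println(meanOfBrightness(rgb));                       // one feature
        System.out.println(java.util.Arrays.toString(perBandMean(rgb))); // three features
    }
}
```

The point is not the arithmetic but the dimensionality: the per-band version keeps three separate feature values per filter, so complementary information in the bands is not averaged away.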

We did try to create a 3-slice TIFF file using ImageJ’s stacking tool, with each slice 32-bit. For a given training location/pixel, we had to add it to a class independently for each slice, which resulted in three samples/rows in the feature list table. Again, we are hoping to have the features derived for each input layer simultaneously for the same location/pixel; ideally, clicking “Add to class” once would result in one sample/row in the feature table. All suggestions are welcome.

Thanks in advance!


Hello @Qiao and welcome to the ImageJ forum!

Unfortunately, no. As it is now, the plugin only accepts grayscale or RGB images, not multi-channel ones. This could be something to add in the near future if more people find it useful, although it will be memory-expensive, since it involves multiplying the number of features by the number of channels.
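A rough back-of-envelope sketch of that memory cost, assuming a float32 feature stack (the image size and feature count below are hypothetical, chosen only for illustration):

```java
public class FeatureMemoryEstimate {
    // Approximate memory (bytes) of a float32 feature stack:
    // width * height * nFeatures * nChannels * 4 bytes per value
    static long featureStackBytes(int width, int height, int nFeatures, int nChannels) {
        return (long) width * height * nFeatures * nChannels * 4L;
    }

    public static void main(String[] args) {
        // Hypothetical 2048x2048 image with ~80 features
        long asGray     = featureStackBytes(2048, 2048, 80, 1);
        long perChannel = featureStackBytes(2048, 2048, 80, 3);
        System.out.println(asGray / (1024 * 1024) + " MB vs "
                + perChannel / (1024 * 1024) + " MB");
    }
}
```

With three channels the stack triples, which is why per-channel features are expensive for large images.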


Thanks for your reply @iarganda. We do think that would be very beneficial to us, as our three input channels provide complementary information, but the mean of the three channels does not always give us the classification features we are looking for. It has been one of the issues we have been struggling with while using WEKA segmentation in our current project.

OK, I will take it into account for future development!


Thanks @iarganda. I am now testing on our data using the approach described in the Example: define your own features (https://imagej.net/Scripting_the_Trainable_Weka_Segmentation#Example:_define_your_own_features). It is working reasonably well.

Yes, that is a good workaround for now :slight_smile:

Hi Ignacio,

First - I love what you’ve done with TWS and your other plugins. Thanks so much!

I happened upon this post after reading another on a similar subject. Multi-channel image support would be fantastic! Have you thought about this at all since your last post? I think such a feature would reach a wide audience and add a lot of value.

I have been converting some multi-channel data to RGB, but I have more than three channels that are important. Effectively it’s an N-dimensional data cube where each pixel in each slice is aligned, so converting RGB to HSL and treating H, S, and L as features doesn’t quite do the job relative to applying the filters to each slice and aggregating them for classification.

Thanks!

Hello @AWag,

I will definitely work on the multi-channel approach as soon as I can!


I am trying to take the “define your own features” guide and use some of the features you have built into FeatureStack, e.g. the getGabor() function. I’m new to Beanshell. Is there a way to do this on a specific ImagePlus? The goal is to control both the input ImagePlus and the returned ImagePlus, rather than just using the “original” image.

Hi @AWag,

Sorry for the late response. I did something simple: for each feature layer, I created a copy of one of the input image layers (an ImagePlus object). Below are some of the examples I used, just for your reference.

//------------------------------------------------------------------
import ij.IJ;
import ij.ImagePlus;
import ij.plugin.Duplicator;
import ij.process.ImageProcessor;
import ij.process.StackConverter;
import trainableSegmentation.filters.Entropy_Filter;

duplicator = new Duplicator();

// open the input layer and convert it to 32-bit
ImagePlus ORI = IJ.openImage(Image_File_Name);
if (ORI.getStackSize() > 1)
	new StackConverter(ORI).convertToGray32();
else
	ORI.setProcessor(ORI.getProcessor().convertToFloat());

// mean filter, radius 4
ORIMean4 = duplicator.run(ORI);
IJ.run(ORIMean4, "Mean...", "radius=4");
// ...
// variance filter, radius 4
ORIVar4 = duplicator.run(ORI);
IJ.run(ORIVar4, "Variance...", "radius=4");
// ...
// largest eigenvalue of the structure tensor (FeatureJ)
structure1largest = duplicator.run(ORI);
IJ.run(structure1largest, "FeatureJ Structure", "largest smoothing=1.0 integration=3.0");
IJ.run("Close"); // close the window FeatureJ opens
// ...
// entropy filter, radius 1, 32 histogram bins
Entropy_Filter filter = new Entropy_Filter();
ImageProcessor ip = ORI.getProcessor().duplicate();
entropy1_32 = new ImagePlus("entropy1_32", filter.getEntropy(ip, 1, 32));
// ...

//------------------------------------------------------------------

Qiao

Thanks Qiao,

What about the other filters that are embedded in FeatureStack.java? It looks like you’re using filters from other packages here. I’m ending up going the same route, but to keep things uniform with the filters created in the UI, I was hoping to use the filters in FeatureStack (e.g. getGabor()). I’m not sure whether Beanshell can call these methods, though.

::edit:: I think the issue is related to generics and their (lack of) implementation in the current Beanshell release. See: https://github.com/beanshell/beanshell/issues/66

@iarganda do you have a suggestion on how these methods might be called via Beanshell? Thanks!

Dear @AWag,

There is an example script precisely for Gabor filters here. If you want to use the FeatureStack methods directly, you should do something like this:

import ij.IJ;
import trainableSegmentation.FeatureStack;

// get current image
img = IJ.getImage();

// create feature stack
fs = new FeatureStack( img );

// Sigma defining the size of the Gaussian envelope
sigma = 8.0;
// Aspect ratio of the Gaussian curves
gamma = 0.25;
// Phase
psi = Math.PI / 4.0 * 0;
// Frequency of the sinusoidal component
frequency = 3.0;
 
// Number of different orientation angles to use
nAngles = 5;

// add Gabor filtered image to feature stack
fs.addGabor( img, sigma, gamma, psi, frequency, nAngles );

// show feature stack
fs.show();

Hi @iarganda, just another request for enabling Trainable Weka to use multichannel images. The channels themselves should be very strong training features for semantic classification, e.g. in multichannel histochemical or immunostained images; combined with the filters, it would be very powerful. I guess this could currently be done by defining the features for each channel independently and then combining them manually in the Weka Explorer, but it would be much more efficient to have it all done within Fiji.
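The manual combination described above can be sketched in plain Java (hypothetical feature values; in practice one would merge the per-channel ARFF attribute columns in the Weka Explorer): extract features once per channel, then concatenate the per-channel vectors for the same pixel into a single instance with one class label.

```java
import java.util.Arrays;

public class CombineChannelFeatures {
    // Concatenate per-channel feature vectors for one pixel into a
    // single training instance, keeping the class label once at the end.
    static double[] combine(double[][] perChannelFeatures, double classLabel) {
        int total = 0;
        for (double[] f : perChannelFeatures) total += f.length;
        double[] instance = new double[total + 1];
        int i = 0;
        for (double[] f : perChannelFeatures)
            for (double v : f) instance[i++] = v;
        instance[total] = classLabel;
        return instance;
    }

    public static void main(String[] args) {
        // Hypothetical (Mean, Variance) features for 3 channels at one pixel
        double[][] features = { {60.0, 4.1}, {90.0, 3.2}, {120.0, 5.7} };
        // One row per pixel: 6 feature values plus the class label
        System.out.println(Arrays.toString(combine(features, 1.0)));
    }
}
```

This is exactly the “one click, one row” behavior asked for earlier in the thread, done outside the plugin.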
