Trainable Weka Segmentation 3D - Bulk import Labels/Class from ROI Manager?

Hi all,

I’m talking about training a Trainable Weka Segmentation 3D classifier here.

I don’t fully understand how the label classes are analysed in Trainable Weka Segmentation 3D. I get how it works in 2D, but the 3D case isn’t made clear:

Is the model generated by inspecting all the classes/labels on your stack as independent 2D slices? Or are labels that overlap between consecutive slices treated as a volume, with the model generated to recognise similar volumes?

Some features in my volumes are tubular, some are solid thick vessels, others are small spherical features, and some are tear-drop shaped. Any of these can look similar in 2D depending on how they cut through the imaging plane.

At the moment, I’m assuming that the Trainable Weka Segmentation 3D tool understands overlapping same-class labels as 3D objects.
So, using the selection tool, I’m hand-drawing ROIs and using the ROI Manager to interpolate ROIs between the slices I draw manually.

One issue is that, to apply the ROIs from the ROI Manager back into the Trainable Weka Segmentation 3D window, I have to select each ROI in the manager, which loads it into the Weka window, and then click “Add to class” for that selection. Then I click the next ROI in the ROI Manager and “Add to class” again, ad nauseam. I did this over 180 times yesterday for a few different features prior to training.

Is there any way to bulk-send the selections from the ROI Manager to the Trainable Weka Segmentation v3.2.31 window as a single class, applied on the respective Z slices in the Weka segmentation window?

I’d like to train the model with other volumes, but (a) my assumption about how Class labels are used in the Weka 3D model could be wrong, and (b) what I’m currently doing is a lot of work.

cheers

Hi,

Apologies in advance if I misunderstood your question, but from reading your post I get the impression that you are assuming that Trainable Weka Segmentation classifies objects.
However, at least in the basic mode, you are annotating and classifying pixels (in 2D) or voxels (in 3D).
The features taken into account are features of the local 2D or 3D neighbourhood around the pixel or voxel, respectively. If your objects have very different textures you may get the impression that you are doing object classification, but you are really assigning class probabilities to individual pixels.

Based on these pixel class probabilities you can arrive at a segmentation and then label connected components (these are your objects). By measuring region properties of the connected components you can then classify them based on their shape. I know that ilastik has such a workflow (pixel classification followed by connected components and object classification) built in. I’m not an expert on Weka, so I’m not sure whether such a workflow is built into Weka or whether you need to resort to other Fiji plugins.
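
In case it helps, here is a minimal macro sketch of that second stage. It assumes you have exported the probability map of your class of interest as an 8-bit stack in a window called “prob” (a hypothetical name) and extracts per-slice objects; for genuine 3D objects you would swap the last step for a 3D labelling tool.

// Minimal sketch (untested): threshold a class probability map and extract
// per-slice objects. "prob" is a placeholder window name for an 8-bit
// (0-255) probability map of the class of interest.
selectWindow("prob");
setThreshold(128, 255);    // keep pixels with class probability >= 0.5
run("Analyze Particles...", "size=50-Infinity show=Masks display clear stack");
// For true 3D connected components, use e.g. MorphoLibJ's
// Connected Components Labeling or the 3D Objects Counter instead.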

This might be a good question for @iarganda to answer. However, I will attempt to add to/reiterate VolkerH1’s answer which, according to my understanding of how Trainable Weka Segmentation works, covered the basic idea pretty well. What the plugin does is apply a series of filters/operations to your stack (producing the feature stacks). The 2D version applies 2D filters and the 3D version applies 3D filters; that is the only difference. Once the feature stack has been generated, it takes the pixel/voxel values for every pixel you have selected using the ROIs. It then stores this information essentially in a format which can be thought of as a table where each row is a pixel and each column holds the value of that pixel in a given feature map (including the original grey-scale value and your class label). From this data it trains the classifier (a random forest by default) to classify input on a pixel-by-pixel basis. So in short, no; it does not know the shape/volume associated with the different classes. In fact, it doesn’t even know where each pixel is.
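
Incidentally, the plugin’s main actions are macro-recordable, which is what makes the batch approach below possible. As a rough sketch (the .model path here is just a placeholder, and the Weka window must already be open on your training stack):

// Sketch of the macro-recordable Weka Segmentation calls, as produced by the
// macro recorder; the classifier path is a placeholder.
call("trainableSegmentation.Weka_Segmentation.trainClassifier");   // train on the current traces
call("trainableSegmentation.Weka_Segmentation.getResult");         // per-pixel/voxel classification of the stack
call("trainableSegmentation.Weka_Segmentation.saveClassifier", "/path/to/vessels.model");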

As for mass-moving ROIs from the ROI Manager to the plugin GUI, this is something I also have to do. I wrote myself a script to automate iterating through a series of training stacks (each one generating a new classifier). Below are the functions I wrote to mass-move the ROIs. Hopefully you will find them useful.

// Clear the ROI Manager before loading new traces. save_path controls what
// happens to any existing ROIs: "choose" asks the user, "no save" discards
// them, anything else is treated as a file path to save them to.
function clearROIManager(save_path) {
    count = roiManager("count");
    if (count != 0) {
        if (save_path == "choose"
            && getBoolean("Existing ROIs will be deleted."
                          +"\nWould you like to save them?")) {
            roiManager("save", "");    // empty path displays a save dialog
        } else if (save_path != "no save") {
            roiManager("save", save_path);
        }
        roiManager("deselect");
        roiManager("delete");
    }
}

// Load ROIs from "path" into the ROI Manager and add each one to the given
// Weka class on the slice it was drawn on. "class" is the class index as a
// string (e.g. "0" or "1"); existing ROIs in the manager are handled
// according to save_path (see clearROIManager above).
function addTraces(path, class, save_path) {
    roim_open = isOpen("ROI Manager");
    if (roim_open) {
        clearROIManager(save_path);
    } else {
        run("ROI Manager...");
    }
    roiManager("open", path);
    num_rois = roiManager("count");
    slice_num = 0;
    for (i = 0; i < num_rois; i++) {
        roiManager("select", i);        // places the ROI on the active (Weka) image
        slice_num = getSliceNumber();   // slice this ROI belongs to
        // addTrace takes the class index and slice number (converted to strings)
        call("trainableSegmentation.Weka_Segmentation.addTrace", class, slice_num);
    }
    selectWindow("ROI Manager");
    run("Close");
    if (roim_open) {
        run("ROI Manager...");
        if (save_path != "no save" && save_path != "choose") {
            roiManager("open", save_path);
        }
    }
}
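
For example (the path and class index here are just placeholders), you can send a whole saved RoiSet to class 1 in one go, discarding whatever is currently sitting in the ROI Manager:

// Hypothetical usage: add every ROI from a saved RoiSet to Weka class "1",
// without saving the ROIs currently in the ROI Manager.
addTraces("/path/to/vessel_RoiSet.zip", "1", "no save");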

Thanks Andrew and Volker H1. I think that answers my question.
I’ve looked at Ilastik, suggested by Volker H1, and that looks the goods. It is most definitely working in 3D, and segmentation is very easy. I think I’ll use Ilastik instead.

Thanks for the ROI Manager > Weka functions, Andrew; they will come in particularly useful in the future. Much appreciated.

best regards.