This question is about training a Trainable Weka Segmentation 3D classifier.
I don’t fully understand how the label classes are analysed in Trainable Weka Segmentation 3D. I get how it works in 2D, but the 3D behaviour isn’t made clear:
Is the model generated by inspecting the classes/labels on your stack as independent 2D slices? Or are same-class labels that overlap between consecutive slices treated as a “volume”, with the model trained to recognise similar volumes?
Some features in my volumes are tubular, some are thick solid vessels, others are small spherical features, and some are tear-drop shaped. Any of these can look similar in 2D depending on how they cut through the slice plane.
At the moment, I’m assuming that the Trainable Weka Segmentation 3D tool treats overlapping same-class labels on consecutive slices as 3D objects.
So, using the selection tool, I’m hand-drawing ROIs and using the ROI Manager to interpolate ROIs between the slices that I draw manually.
The problem is that, to apply the ROIs from the ROI Manager back into the Trainable Weka Segmentation 3D window, I have to select each ROI in the manager (which loads it into the Weka window), click “Add to class” for that selection, then click the next ROI in the manager and “Add to class” again, ad nauseam. I did this over 180 times yesterday for a few different features before training.
Is there any way to bulk-send the selections from the ROI Manager to the Trainable Weka Segmentation v3.2.31 window as a single class, applied to the respective Z slices in the Weka Segmentation window?
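For what it’s worth, the workaround I’ve been considering is scripting this instead of clicking through the GUI. Trainable Weka Segmentation has a scripting API (`trainableSegmentation.WekaSegmentation`) with an `addExample(classIndex, roi, slice)` method, so something like the Jython sketch below might loop over the ROI Manager and register every ROI as a trace for one class. This is untested and bypasses the GUI window entirely (it trains headless), and I’m assuming the `WekaSegmentation(True)` constructor enables the 3D mode in recent versions, so treat it as a sketch rather than a known-good recipe:

```python
# Jython sketch for the Fiji script editor -- untested assumption, not a verified workflow.
# Assumes: the training stack is the active image, the ROI Manager holds one ROI per
# labelled slice, and the trainableSegmentation library is available (it ships with Fiji).
from ij import IJ
from ij.plugin.frame import RoiManager
from trainableSegmentation import WekaSegmentation

image = IJ.getImage()                 # the active stack
segmentator = WekaSegmentation(True)  # True = 3D processing (assumed constructor flag)
segmentator.setTrainingImage(image)

CLASS_INDEX = 0                       # 0 = first class (e.g. "class 1" in the GUI)

rm = RoiManager.getInstance()
for roi in rm.getRoisAsArray():
    slice_index = roi.getPosition()   # 1-based slice the ROI was drawn on
    if slice_index > 0:
        # Register this ROI as a training trace for the class on its slice
        segmentator.addExample(CLASS_INDEX, roi, slice_index)

segmentator.trainClassifier()
```

If something like this works, the 180-click step collapses into one loop, but I’d still like to know whether the GUI itself supports bulk-adding ROIs.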
I’d like to train the model with other volumes, but (a) my assumption about how Class labels are used in the Weka 3D model could be wrong, and (b) what I’m currently doing is a lot of work.