How many 2D slices do I need to train a classifier for 3D Trainable Weka Segmentation?

fiji

#1

Hi everyone,

I have a stack of 2,000 images (2D slices) and I want to train a classifier on a subvolume (sub-stack) of it so that I can segment the whole stack. However, I am not sure how many slices would be ideal to obtain the best results.
Can someone please advise me?

Many thanks for your help.
Kerry


#2

Hello @kerry,

My general recommendation would be that you create a subvolume that is representative of the rest of the volume. In other words, it should have the same type of image content as in the other parts of the whole dataset.

Once you have a classifier you like for that subvolume, you can apply it to the entire volume using this script, which saves RAM while preserving accuracy.
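In case it is useful, the core operation (loading a saved classifier and applying it to an image) can be sketched in a few lines of BeanShell, assuming the scripting API from the plugin documentation (WekaSegmentation, loadClassifier, applyClassifier); the paths below are placeholders, and the linked script adds the folder handling and tiling on top of this:

```java
import ij.IJ;
import trainableSegmentation.WekaSegmentation;

// placeholder path -- replace with your own image
input = IJ.openImage( "/path/to/volume.tif" );

// create a segmentator set up for 3D features and load the trained classifier (.model file)
segmentator = new WekaSegmentation( true );  // true = 3D processing
segmentator.loadClassifier( "/path/to/classifier.model" );

// apply it: 0 = use all available threads, false = label image (true = probability maps)
result = segmentator.applyClassifier( input, 0, false );
result.show();
```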


#3

Dear @iarganda,

Thank you very much for your response. I have looked this up, but I could not understand how to create the tiles. Is there a video that would help me? Any advice or guidance would be greatly appreciated.

Also, I have never used a macro before. Do I need to edit anything in the existing script, for instance the name of the classifier?

Many thanks.
Kerry


#4

My bad; I know I should create more videos explaining all the options in the plugin.

Let me walk you through the process of using the script. It is very simple: you only need to enter your own paths and parameters in the dialog once the script is called. The steps are as follows (a simplified sketch of what the script does internally is shown below):

  1. Open the script editor: File > New > Script…

  2. Copy and paste the script code into the Script Editor window.

  3. Save the script with the name that you want and the extension .bsh (BeanShell format), for instance “TWS-process-folder-by-tiles.bsh”.

  4. Click on “Run” and a menu window will be displayed:

    [screenshot of the script’s parameter dialog]

    The parameters are as follows:

    • Input directory: folder with the images you want to process using your trained classifier.
    • Output directory: folder where you want to store the results.
    • Weka model: path to your trained classifier file (.model).
    • Result mode: “Labels” or “Probabilities” to get final segmentations or probability maps as output respectively.
    • Number of tiles in X: number of subdivisions in the X direction.
    • Number of tiles in Y: number of subdivisions in the Y direction.
    • Number of tiles in Z: number of subdivisions in the Z direction.
  5. Click on “OK” and the script will be launched.

That’s it. I hope everything is clear now. You should select the number of tiles based on your image size. On my machine I ran some experiments using 95 features with tiles of about 128 x 128 x 128 voxels, and the RAM consumption stayed under 5 GB.
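The script already takes care of all of this for you, so there is nothing to code yourself, but just to illustrate the idea behind it, here is a simplified BeanShell sketch that processes a stack as Z-substacks only, assuming the scripting API from the plugin documentation (WekaSegmentation, loadClassifier, applyClassifier). The paths and tile count are placeholders, and the real script also tiles in X and Y and assembles the tiles back into a single output:

```java
import ij.IJ;
import ij.plugin.Duplicator;
import trainableSegmentation.WekaSegmentation;

// placeholder path and tiling -- adjust to your data
image = IJ.openImage( "/path/to/big-stack.tif" );
nTilesZ = 20;
depth = image.getNSlices();
tileDepth = (int) Math.ceil( depth / (double) nTilesZ );

// load the trained 3D classifier once
segmentator = new WekaSegmentation( true );  // true = 3D processing
segmentator.loadClassifier( "/path/to/classifier.model" );

for( int z = 1; z <= depth; z += tileDepth )
{
    lastZ = Math.min( z + tileDepth - 1, depth );
    // crop the substack [z, lastZ] (channels and frames fixed to 1)
    tile = new Duplicator().run( image, 1, 1, z, lastZ, 1, 1 );
    // 0 = use all available threads, false = label image (true = probability maps)
    result = segmentator.applyClassifier( tile, 0, false );
    IJ.save( result, "/path/to/output/tile_" + z + ".tif" );
}
```

The point of tiling is that the feature stack (here, 95 features) is computed per tile instead of for the whole volume at once, which is what keeps the RAM usage low.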


#5

Dear @iarganda,

I got it now. Thank you very much for this.

I trained my classifier with 4 labels, but the segmentation did not work: the output file contained only one label. Is there any particular reason or possible explanation for this?

My input stack is a 1988 x 2032 x 100 image. I used x tiles = 1, y tiles = 1, and z tiles = 20.
PS: I tried unsuccessfully to upload my input, classifier, and output files for your perusal.

Many thanks.
Kerry


#6

That’s very strange. Did you use the 2D or the 3D version of the plugin? Maybe you mixed them up?

For a 3D image of size 1988 x 2032 x 100, I would try with x tiles = 10, y tiles = 10, and z tiles = 2. That way you would have tiles of size ~199 x 203 x 50, which are easy to handle.
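Just to make the arithmetic explicit, the tile size is simply the image size divided by the number of tiles in each dimension; a throwaway BeanShell check with the numbers from your stack:

```java
// tile size ≈ image size / number of tiles, per dimension
int[] size  = new int[]{ 1988, 2032, 100 };  // image dimensions in X, Y, Z
int[] tiles = new int[]{ 10, 10, 2 };        // number of tiles per dimension
for( int i = 0; i < 3; i++ )
    print( Math.round( size[ i ] / (double) tiles[ i ] ) );  // prints 199, 203, 50
```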

Can’t you upload them to Dropbox, Google Drive, or a similar service?


#7

Dear iarganda,

I used the 3D version. I trained a subvolume.

Please see the following link for details: https://drive.google.com/open?id=1wa38g-GQmAC-pLkh0l83CsO9_0lk7sYY

Many thanks.
Kerry


#8

Hello @kerry,

I have run the script using your input folder and the model called “classifier_35_slices.model” with x tiles = 10, y tiles = 10, and z tiles = 2, and the result was correct:

https://ehubox.ehu.eus/index.php/f/74499409

Did you try those parameters?

ignacio


#9

Dear @iarganda

Thank you very much for taking the time to try this out. I also think that the classifier trained with 35 slices works. Nonetheless, I would like to know why the classifier trained with 5 slices fails. Could it be because the subvolume is too small?

Secondly, although I’m unable to view your results through the link, I’m wondering why the green-coloured label in my results seems not very accurate (I believe it will be the same in yours).

Many thanks.
Kerry


#10

Try with this link: https://ehubox.ehu.eus/s/j6ryWAg97TwZkBk

The result looks like this:

I assume that’s the other model file? Let me give it a try.