Trainable Weka Error (large images)

fiji
weka
imagej
trainable-weka

#1

Hello everyone,

I tried to do some segmentation with the Trainable Weka Segmentation plugin in Fiji. To do so, I loaded a classifier, clicked on “Apply classifier” and chose an image. I recorded this with the macro recorder:

selectWindow("203-001 - 2018-01-09 14.31.57_x20_z0_1.jpg");
run("Trainable Weka Segmentation");
selectWindow("Trainable Weka Segmentation v3.2.24");
call("trainableSegmentation.Weka_Segmentation.loadClassifier", "W:\\classifier003.model");
call("trainableSegmentation.Weka_Segmentation.applyClassifier", "W:\\Schnitte jpeg_gedreht - kleiner Stack", "203-001 - 2018-01-09 14.31.57_x20_z0_1.jpg", "showResults=true", "storeResults=false", "probabilityMaps=false", "");

The images I want the Weka plugin to run on (there are actually several of them, but I tried to process them one by one) are quite large (approximately 20k x 18.5k pixels). However, after quite a while the segmentation ends with an error in the log:

WARNING: core mtj jar files are not available as resources to this classloader (sun.misc.Launcher$AppClassLoader@764c12b6)
Exception in thread "Thread-14" java.lang.OutOfMemoryError: Java heap space
	at weka.core.DenseInstance.copy(DenseInstance.java:145)
	at weka.core.Instances.add(Instances.java:322)
	at weka.core.Instances.copyInstances(Instances.java:2175)
	at weka.core.Instances.<init>(Instances.java:237)
	at trainableSegmentation.WekaSegmentation.applyClassifier(WekaSegmentation.java:5991)
	at trainableSegmentation.WekaSegmentation$1ApplyClassifierThread.run(WekaSegmentation.java:4996)
Exception in thread "Thread-13" java.lang.NullPointerException
	at trainableSegmentation.WekaSegmentation.applyClassifier(WekaSegmentation.java:5054)
	at trainableSegmentation.Weka_Segmentation$1ImageProcessingThread.run(Weka_Segmentation.java:1817)
Exception in thread "Thread-17" java.lang.OutOfMemoryError: Java heap space
	at trainableSegmentation.FeatureStack.createInstance(FeatureStack.java:3155)
	at trainableSegmentation.FeatureStack.createInstances(FeatureStack.java:2184)
	at trainableSegmentation.WekaSegmentation$1ApplyClassifierThread.run(WekaSegmentation.java:4992)
Exception in thread "Thread-16" java.lang.NullPointerException
	at trainableSegmentation.WekaSegmentation.applyClassifier(WekaSegmentation.java:5054)
	at trainableSegmentation.Weka_Segmentation$1ImageProcessingThread.run(Weka_Segmentation.java:1817)

To me this looks like my memory is not large enough (even though I work on a cluster node with 512 GB of RAM!). Is there any way to get this working?

Many thanks in advance!

Max


#2

Dear @MaxAC,

It definitely is a memory issue. Are you sure you are running Fiji with full access to your 512 GB of RAM? Have a look at this page in the wiki to see how to increase that limit.
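For reference, a sketch of how the heap can be raised from the command line (assuming a Linux install; the `--mem` option is passed to the ImageJ launcher, which sets the JVM's maximum heap):

```shell
# Hypothetical launch command: allow the Java heap to grow to ~450 GB.
# Adjust the value to leave some headroom for the OS and other processes.
./ImageJ-linux64 --mem=450g
```

The same limit can also be set from within Fiji via Edit > Options > Memory & Threads…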

Another option is to divide your large image into tiles of “affordable” size, classify each tile separately, and put the results back together.
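The split/classify/stitch idea can be sketched in plain Python (a toy stand-in: images as nested lists and a hypothetical per-pixel thresholding “classifier” instead of a real Weka model):

```python
def split_into_tiles(image, x_tiles, y_tiles):
    """Split a 2D image (list of rows) into a y_tiles x x_tiles grid of tiles."""
    height, width = len(image), len(image[0])
    grid = []
    for ty in range(y_tiles):
        y0, y1 = ty * height // y_tiles, (ty + 1) * height // y_tiles
        band = []
        for tx in range(x_tiles):
            x0, x1 = tx * width // x_tiles, (tx + 1) * width // x_tiles
            band.append([row[x0:x1] for row in image[y0:y1]])
        grid.append(band)
    return grid

def stitch_tiles(grid):
    """Reassemble a grid of tiles into one 2D image."""
    image = []
    for band in grid:
        for r in range(len(band[0])):
            stitched_row = []
            for tile in band:
                stitched_row.extend(tile[r])
            image.append(stitched_row)
    return image

def classify(tile, threshold=128):
    """Stand-in per-pixel classifier: label 1 above the threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in tile]

def classify_in_tiles(image, x_tiles, y_tiles):
    """Classify each tile separately, then stitch the labeled tiles back."""
    grid = split_into_tiles(image, x_tiles, y_tiles)
    labeled = [[classify(tile) for tile in band] for band in grid]
    return stitch_tiles(labeled)
```

Because this toy classifier looks at one pixel at a time, the tiled result matches the whole-image result exactly; a real Weka classifier uses neighborhood features, so tile borders may show minor edge effects unless the tiles overlap.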


#3

Dear @iarganda,

thanks for your reply! I looked it up, and Fiji was indeed using ‘only’ 300-something GB. I will try it again now. In case it does not work out again, do you have an idea how I can automatically divide the picture into tiles and put it back together afterwards?

Thanks in advance!

Max

EDIT: it worked, thanks a lot!


#4

Dear @MaxAC,
I was trying to write a macro to do the image division, and I ended up creating a new method in the plugin library that performs the classification by tiling the input image. Please update the plugin to the latest release and try the following BeanShell script:

// @File(label="Input directory", description="Select the directory with input images", style="directory") inputDir
// @File(label="Output directory", description="Select the output directory", style="directory") outputDir
// @File(label="Weka model", description="Select the Weka model to apply") modelPath
// @String(label="Result mode",choices={"Labels","Probabilities"}) resultMode
// @Integer(label="Number of tiles in X:", description="Number of image subdivision in the X direction", value=3) xTiles
// @Integer(label="Number of tiles in Y:", description="Number of image subdivision in the Y direction", value=3) yTiles
// @Integer(label="Number of tiles in Z (if 3D):", description="Number of image subdivision in the Z direction (ignored when using 2D images)", value=3) zTiles
 
import trainableSegmentation.WekaSegmentation;
import trainableSegmentation.utils.Utils;
import ij.io.FileSaver;
import ij.IJ;
import ij.ImagePlus;
  
// starting time
startTime = System.currentTimeMillis();
  
// calculate probabilities?
getProbs = resultMode.equals( "Probabilities" );
 
// create segmentator
segmentator = new WekaSegmentation( zTiles > 0 );
// load classifier
segmentator.loadClassifier( modelPath.getCanonicalPath() );
  
// get list of input images
listOfFiles = inputDir.listFiles();
for ( i = 0; i < listOfFiles.length; i++ )
{
    // process only files (do not go into sub-folders)
    if( listOfFiles[ i ].isFile() )
    {
        // try to read file as image
        image = IJ.openImage( listOfFiles[i].getCanonicalPath() );
        if( image != null )
        {
            // choose tiling per dimension (2D or 3D input)
            tilesPerDim = new int[ 2 ];
            if( image.getNSlices() > 1 )
            {
                tilesPerDim = new int[ 3 ];
                tilesPerDim[ 2 ] = zTiles;
            }
            tilesPerDim[ 0 ] = xTiles;
            tilesPerDim[ 1 ] = yTiles;

            // apply classifier and get results (0 indicates number of threads is auto-detected)
            result = segmentator.applyClassifier( image, tilesPerDim, 0, getProbs );

            if( !getProbs )
                // assign same LUT as in GUI
                result.setLut( Utils.getGoldenAngleLUT() );

            // save result as TIFF in output folder
            outputFileName = listOfFiles[ i ].getName().replaceFirst("[.][^.]+$", "") + ".tif";
            new FileSaver( result ).saveAsTiff( outputDir.getPath() + File.separator + outputFileName );
  
            // force garbage collection (important for large images)
            result = null; 
            image = null;
            System.gc();
        }
    }
}
// print elapsed time
estimatedTime = System.currentTimeMillis() - startTime;
IJ.log( "** Finished processing folder in " + estimatedTime + " ms **" );
System.gc();

You can define how many subdivisions (or tiles) to use per image dimension. Only the memory needed to process one tile is used at a time, so the method should be a bit slower than the regular one but consume far less memory, while producing essentially the same result. Let me know if this works for you!