Choosing training features in weka segmentation

Before I make a new classifier, could I get some short input on which training features to use? These are the ones I currently use, and I will use them to train the classifier on a 5-stack sample (with different z-projections, but the same imaging parameters). I know you explain it on the wiki, but it is slightly complicated for someone who just dabbles in image processing on the side.

Thanks for any advice you can offer.

(Or should I make a new thread?)


I would use just a few of those features. You need to detect the soma by its texture and the dendrites by their shape, so I would select an edge detector such as Difference of Gaussians, a texture feature such as Median, and the Membrane projections with a membrane thickness of approximately the thickness of your dendrites. That should be enough.
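To give a feel for why a Difference of Gaussians works as an edge detector, here is a minimal 1-D sketch in plain Python (not the Weka feature code itself, just the idea): smoothing a step signal with two Gaussians of different width and subtracting gives a response that is near zero in flat regions and peaks at the edge, which is what makes it useful for picking out dendrite boundaries.

```python
import math

def gaussian_kernel(sigma, radius):
    # normalized discrete Gaussian weights at offsets -radius..radius
    ker = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(ker)
    return [k / s for k in ker]

def smooth(signal, sigma, radius=10):
    # 1-D Gaussian convolution with edge clamping at the borders
    ker = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(ker):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += k * signal[idx]
        out.append(acc)
    return out

def difference_of_gaussians(signal, sigma1=1.0, sigma2=3.0):
    # narrow smoothing minus wide smoothing
    a = smooth(signal, sigma1)
    b = smooth(signal, sigma2)
    return [x - y for x, y in zip(a, b)]

step = [0.0] * 30 + [1.0] * 30  # a sharp edge at index 30
dog = difference_of_gaussians(step)
peak = max(range(len(dog)), key=lambda i: abs(dog[i]))
# The strongest DoG response sits at the edge, not in the flat regions.
```

The two sigmas play the same role as the minimum and maximum sigma in the Weka feature settings: the larger the gap between them, the coarser the edges the feature responds to.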


I just moved your post and @iarganda’s reply to this new thread, because I think this information is generally helpful and should be easy to find by others without crawling the rather long other thread.


Thanks, @imagejan, I need to learn how to do that :slight_smile:

I did some experimentation with the features you suggested and a few more (Neighbors, Structure), but now I am experiencing a very strange bug. Running it through the plugin UI gives me a different result than running it in the script… this is all on the new version. I’ve tried saving and re-running it multiple times, and it still gives a different (worse) result.

The right image is the one from the plugin UI and the left is from my script… (ignore the colors, I set them myself).

(Also, I don’t know if this is known, but saving a classifier takes forever, and there is no progress bar. You simply have to watch the file in the folder to see when it stops increasing in size. Maybe this problem is on my end.)


The script again:

from ij import IJ
from ij.plugin import ZProjector
from ij.process import ImageProcessor
from loci.plugins import BF
from fiji.threshold import Auto_Threshold
from trainableSegmentation import WekaSegmentation

def imageprocessing():
    path = "\\\\\\sgr073\\Settings\\Desktop\\sgerre.lif"
    # Bio-Formats can open from the file path directly
    imps = BF.openImagePlus(path)

    for imp in imps:
        # maximum-intensity z-projection of the stack
        projector = ZProjector(imp)
        projector.setMethod(ZProjector.MAX_METHOD)
        projector.doProjection()
        impout = projector.getProjection()
        projection = impout.getTitle()
        # classify the projection with a previously trained model
        weka = WekaSegmentation(impout)
        weka.loadClassifier("path/to/classifier.model")  # placeholder: path to your saved model
        weka.applyClassifier(False)
        result = weka.getClassifiedImage()
        # threshold the classified image and post-process
        hist = result.getProcessor().getHistogram()
        lowth = Auto_Threshold.Shanbhag(hist)
        result.getProcessor().setThreshold(0, 0, ImageProcessor.NO_LUT_UPDATE)
        result.getProcessor().invert()
        IJ.run(result, "Watershed Irregular Features", "erosion=20 convexity_threshold=0 separator_size=0-Infinity")
        roi =

That’s very strange. I have tried to reproduce the error, but on my machine the result is the same pixel by pixel (except for the image type, which is 8-bit for the GUI result and 32-bit for the script result).
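For reference, the 8-bit vs 32-bit difference is only a matter of pixel scaling, not of classification. A plain-Python sketch (not ImageJ code) of the kind of min/max rescaling that happens when a 32-bit float image is converted to 8-bit, assuming the display range is used as the conversion range:

```python
def to_8bit(pixels, vmin, vmax):
    # map values linearly from [vmin, vmax] onto 0..255 and clamp
    if vmax <= vmin:
        return [0 for _ in pixels]
    scale = 255.0 / (vmax - vmin)
    out = []
    for p in pixels:
        v = int(round((p - vmin) * scale))
        out.append(min(max(v, 0), 255))
    return out

float_pixels = [0.0, 0.25, 0.5, 1.0]
byte_pixels = to_8bit(float_pixels, 0.0, 1.0)  # [0, 64, 128, 255]
```

So two results can be "the same pixel by pixel" in this sense even when one is stored as 8-bit and the other as 32-bit.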

Can you send me the model you are using?

> (Also, I don’t know if this is known, but saving a classifier takes forever, and there is no progress bar. You simply have to watch the file in the folder to see when it stops increasing in size. Maybe this problem is on my end.)

Saving the classifier into a file should be very fast. There is no progress bar because it is saved all at once. Maybe you’re saving it on a remote hard drive?

Here is the model I am using, which still gives me a different result than the one pictured. I have no idea why this happened, but I guess I’ll have to start over. Or is there some way to open this model and keep training it? It took me some time to build, and I want a really robust model that can handle many images of this type.

And yes, the saving is on a remote server; I’ll use a memory stick or something from now on. It took as much as 30 minutes to save… :stuck_out_tongue:

I used your model and got the same result from the script and the GUI:

Are you sure you trained the GUI on exactly the same image? Here is the one I’m using.

I just realized your “Classified image” has 4 slices instead of 1, why is that?

I trained the classifier on 4 images in a stack; is this redundant?

That’s fine as long as the image in the script is one of the images in the stack. Is this the case?

Can you send me the stack?

It is, here you go.

OK, so the problem is the image in the stack and the image in the script are slightly different:

Maybe you did some contrast enhancement in one of them?
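A quick way to check this in Fiji is Process ▸ Image Calculator with the "Difference" operation on the two images: any nonzero pixel means they differ. The same check, sketched in plain Python over raw pixel values:

```python
def difference(pixels_a, pixels_b):
    # per-pixel absolute difference; two images are identical
    # exactly when every difference is zero
    assert len(pixels_a) == len(pixels_b), "images must have the same size"
    return [abs(a - b) for a, b in zip(pixels_a, pixels_b)]

original = [0, 10, 200, 255]
enhanced = [0, 12, 230, 255]  # e.g. after contrast enhancement
diff = difference(original, enhanced)
identical = (max(diff) == 0)  # False: the images are not the same
```

Even small contrast adjustments shift the feature values the classifier sees, which is enough to change the segmentation.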

I see. I didn’t think I had, but I guess I must have. Thanks for the help Ignacio :slight_smile:
