Quantification of DAB stained area

Hello everyone.
I am trying to quantify the DAB-stained area on leaves using Trainable Weka Segmentation.
I am now in a dilemma over whether or not I need to adjust contrast and brightness (for easier segmentation).
I have tried segmenting both the original and the modified image, and they give different results.
To get a ground-truth result, should I use the original image or the modified one?

Another problem I have faced: a classifier trained on only one sample image is not enough, as the other leaves have different stain intensities. Once the classifier is trained, I could not train it again on other sample images.
Any suggestions?
Thank you in advance!

Generally DAB is used to stain tissue that is relatively clear, so if you want a ground truth, you stain only with DAB. In your case, I don’t think you can get a ground truth based on the design of your experiment. The best you can do is keep the lighting consistent, and look for relative changes between experimental groups.

It might be a stretch, but if you had something “white” that you could also stain with DAB in the same image (a white dot in one of the corners), that might give you a decent reference, but I am not sure. Color deconvolution is intended for only two different colors, so I’m not sure how well that will work for the leaf and a blue background.

Ideally, you would use some other method to get a ground truth result (I don’t know, biologically, what you are measuring), and then use that to create training areas. Those training areas could be used for a pixel classifier (WEKA, Ilastik, etc). I am not sure what other methods you might have access to that would give you such information though.

A negative control would certainly help.

Hello!
The method I am adopting is from the article below.
Automated image analysis for quantification of reactive oxygen species in plant leaves
Briefly, the authors used Weka to segment the image into two classes of pixels, with reaction and without reaction; the outcome is a classifier that can then be used to predict other leaf images.

To accurately extract the leaf blade, they converted the image from RGB to HSV (to detect different stain intensities) and applied Otsu thresholding to obtain a mask.

The stained area was then quantified from the HSV leaf image on top of the mask.
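For anyone following along, the masking and quantification steps described above could be sketched roughly like this in Python with scikit-image. The channel choices (saturation for the leaf mask, value for the stain) and the "stained pixels are darker" criterion are my assumptions for illustration, not taken from the paper:

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def stained_fraction(rgb):
    """Return the fraction of leaf-blade pixels counted as DAB-stained."""
    hsv = rgb2hsv(rgb)          # H, S, V channels, each in [0, 1]
    sat = hsv[..., 1]
    # Otsu on saturation separates the leaf from a pale background
    # (assumption: the background is much less saturated than the leaf).
    leaf_mask = sat > threshold_otsu(sat)
    # Assumption: DAB-stained (brown) pixels are darker than healthy
    # tissue, so threshold the Value channel inside the leaf mask.
    val = hsv[..., 2]
    stained = leaf_mask & (val < threshold_otsu(val[leaf_mask]))
    return stained.sum() / leaf_mask.sum()
```

This is only a two-threshold stand-in for the paper's Weka classifier, but it makes the RGB-to-HSV, mask, then quantify-within-mask order of operations concrete.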

By the way, my leaf images were scanned into the computer through a multifunction printer, so I believe the lighting conditions inside the printer are consistent?

“The classifier operation includes supervised training and prediction phases”
It sounds like they selected the pixels for the classifier themselves. So there was no ground truth, just what they thought the result should be, and the classifier found pixels similar to that.

Not sure about the printer.

*Based on the paper, it sounds like later versions of Weka should be trainable on a stack. So you would need to create an appropriate input stack. There are a few other posts on Weka on the forums that probably have more information.
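To train on several leaves at once, the images need to be combined into a single stack with matching dimensions (in Fiji this is what Image > Stacks > Images to Stack does interactively). A minimal sketch of the same idea in Python, padding images to a common size before stacking; the zero-padding choice is an assumption, not something the paper specifies:

```python
import numpy as np

def build_stack(images, fill=0):
    """Pad 2-D images to the largest height/width and stack along axis 0."""
    h = max(im.shape[0] for im in images)
    w = max(im.shape[1] for im in images)
    padded = []
    for im in images:
        out = np.full((h, w), fill, dtype=im.dtype)
        out[:im.shape[0], :im.shape[1]] = im   # place image at top-left
        padded.append(out)
    return np.stack(padded)
```

A single classifier trained on such a stack sees the full range of stain intensities across samples, which is the point of training on multiple leaves rather than one.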

Alright, noted. I have created the stack and am training on it now.

Previously, I tried training on edited images (contrast and brightness adjusted) and on the original images. When each classifier was used to predict its respective images (the classifier trained on edited images was used to predict other edited images), the two showed large differences in the probability map.
Why is that?
And which do you recommend using, the original images or the edited ones?

I’m not sure there is a correct answer to the second. And the first depends on your training data, and the general amount of variation in your images versus the training set. When I create training sets, I try to take data from the most extreme examples that I could possibly want to identify, both on the high end, and even more importantly, usually, on the low end (faintest staining I want to consider positive).

Essentially, I think it is up to you to test your models and images, and determine which is the most accurate for what you are trying to measure. And then share enough data in any publication to justify that decision. These days sharing data is easy. Space is cheap, supplementary figures and data are common.

Would you be comfortable sharing all of your images, thresholds, and data publicly? If so, then you probably chose well.

*I suspect non-linear adjustments to the image are probably not a good idea? But I don't know for sure what adjustments are actually being made, or exactly how they might influence the data. Ideally, if you are not "washing out" your image (RGB pixels are not hitting 255 or 0 in 8-bit), you should be mostly safe. I think. Maybe others will have more informed suggestions.
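One quick sanity check along these lines: count how many channel values in an 8-bit image sit at the extremes (0 or 255) after adjustment. This is my own suggestion, not from the thread above; there is no standard cutoff, but a fraction well above what the original image shows would suggest the adjustment is clipping data:

```python
import numpy as np

def clipping_fraction(img_u8):
    """Fraction of channel values clipped to 0 or 255 in an 8-bit image."""
    vals = np.asarray(img_u8)
    return np.mean((vals == 0) | (vals == 255))
```

Comparing this number before and after a brightness/contrast adjustment would show whether the edit is pushing pixels into the "washed out" range.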