Need help counting segmented areas

Hi, I have an image of a textile and I want to count the number of segmented areas in it. The image is attached. As humans we can easily distinguish these pixel-like textile segments, but how can we count them automatically? I have been struggling with ImageJ filters for a while and have tried many, many filters, but with no success. Could anyone kindly help me?

The threads are actually very much like pixels, and as long as the image is taken correctly (good lighting, camera parallel to the object, etc.), you should be able to use a pixel classifier like Weka or Ilastik to define the darker boundaries, as the boundary threads tend to be “darker.” The exception might be some of the areas that have a kind of light tan-green shading.

It would actually be easiest if you wanted a segmented area for every color type, as training the classifier would be straightforward. It gets more complicated if you mean “flower” or “petal” or something else that requires context.
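If the darker boundaries are distinct enough, you might even get a first count from a plain threshold and particle analysis before training any classifier. A minimal Groovy sketch against the ImageJ API, not a trained classifier; the file path and the size filter are made-up values:

import ij.IJ

def imp = IJ.openImage('/path/to/textile.jpg')  // hypothetical path
IJ.run(imp, "8-bit", "")                        // convert to grayscale
IJ.setAutoThreshold(imp, "Otsu dark")           // light thread segments = foreground, dark boundaries = background
IJ.run(imp, "Convert to Mask", "")
IJ.run(imp, "Watershed", "")                    // split touching segments
IJ.run(imp, "Analyze Particles...", "size=20-Infinity summarize")  // count the segments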


Dear Research_Associate,

Thanks for your interesting answer. I am not familiar with those pixel-classifier tools, but now I am very interested in learning about them.

Considering your advice, I would like to be more specific about what I need, so that you could kindly guide me further and I could solve my problem in easier steps:

1- Actually, I just need to count two lines of pixel-like segments, one horizontal line and one vertical line.

2- By counting I mean counting the exact number of pixel-like segments (they are like small rectangles, each with a specific color, and around most of them a darker boundary is visible). I am free to restrict myself to a single color, but then another problem arises: how to automatically select a specific area that contains similar colors…

3- Counting only 20 to 25 segments in a single direction is enough for me, because I am going to calculate the number of segments per unit length in the vertical and horizontal directions (a small worked sketch follows this list). I don’t need to process the whole image.

4- Considering your point that training a pixel classifier would have an exception in some light tan-green shaded areas, I should add that I am free to choose which line to consider and count. There is no limit on where to select, but as I said, the process should be automatic.

5- I am able to take proper images with good lighting and the camera parallel to the object.
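To make point 3 concrete, here is the calculation I have in mind, written as a small Groovy snippet with invented numbers:

// All numbers are invented, just to illustrate the calculation
def segmentCount = 22     // pixel-like segments counted along one horizontal line
def lineLengthMm = 30.0   // measured physical length of that line, in mm
def density = segmentCount / lineLengthMm
println 'Segments per mm (horizontal): ' + density  // 22 / 30 = ~0.73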

If you have any other advice I would appreciate your help.

If you can orient the original image perfectly, you should be able to classify the entire image, or a set of images, fairly easily using a grid that aligns with the square objects you are interested in.
As an example, I took your image and dropped it into QuPath (I don’t use Weka or Ilastik very much; I just see them most commonly used for pixel-classification problems, and won’t be any further help beyond the recommendation), where I created a 5 pixel grid.


This image wasn’t taken flat, so the tiles don’t really match up, but if the image was taken and cropped perfectly, so that a set number of pixels existed per tile and each tile was that number of pixels in size, you could create as many classes as you wanted for the various colors and then do whatever you wanted with the data later. This only took a couple of minutes; I chose just 4 color classes plus a background and let the classifier guess at the rest.

Script:

// Set the image type and the default H-DAB color deconvolution stains
setImageType('BRIGHTFIELD_H_DAB');
setColorDeconvolutionStains('{"Name" : "H-DAB default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049 ", "Stain 2" : "DAB", "Values 2" : "0.26917 0.56824 0.77759 ", "Background" : " 255 255 255 "}');
// Create a single annotation covering the whole image
createSelectAllObject(true);
// Split that annotation into a grid of 5 x 5 pixel detection tiles
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizePx": 5.0,  "trimToROI": true,  "makeAnnotations": false,  "removeParentAnnotation": false}');
// Select all tiles and compute per-tile mean color features for the classifier to use
selectDetections();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"downsample": 1.0,  "region": "ROI",  "tileSizePixels": 200.0,  "colorOD": true,  "colorStain1": true,  "colorStain2": true,  "colorStain3": true,  "colorRed": true,  "colorGreen": true,  "colorBlue": true,  "colorHue": false,  "colorSaturation": false,  "colorBrightness": false,  "doMean": true,  "doStdDev": false,  "doMinMax": false,  "doMedian": false,  "doHaralick": false,  "haralickDistance": 1,  "haralickBins": 32}');

The script, of course, doesn’t include the training areas that I used, or the classifier, as you would need to do that yourself. Hopefully Weka or Ilastik have similar options for your analysis. If you decide you want to use QuPath, there is plenty of further information on classifiers around the forums, with some quick links at the beginning of this post.
It would require almost perfect alignment of the initial picture, though.
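Once the tiles are classified, counting them per class only takes a few more lines. A hedged sketch, not part of the script above; it assumes whatever class names you end up training, uses QuPath’s getDetectionObjects() scripting helper, and the row coordinate is made up:

// Count the classified 5 px tiles per class
def tiles = getDetectionObjects()
def perClass = tiles.countBy { it.getPathClass()?.toString() ?: 'Unclassified' }
perClass.each { cls, n -> println cls + ': ' + n }

// Tiles lying along one horizontal band, e.g. to count segments across a single row
// (y0 is a made-up y-coordinate; 2.5 px is half the tile size, used as a tolerance)
def y0 = 100
def row = tiles.findAll { Math.abs(it.getROI().getCentroidY() - y0) < 2.5 }
println 'Tiles in the chosen row: ' + row.size()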


Research_Associate, well done!!! I can hardly believe what you have done here so quickly, and the result is amazing.

Many thanks for sharing your knowledge. I am going to stick to that post of yours about QuPath. At first glance, the base image used for classification seems a little complicated to me, since I am a newbie at image processing, but I am a hard worker and want to learn and implement the steps myself.

I will come back to you here in case of any questions; I really appreciate your help…

I wouldn’t use the thread at that link itself so much as the links in its first paragraph to various other resources. The thread itself is more relevant to fluorescent biological image analysis.


Thanks for pointing this out; I have downloaded QuPath, so I will use the links provided at the beginning of that thread to learn the software.

A basic question before learning QuPath:
If I take a flat, proper image, then use QuPath to train and classify it (introducing a background color and a number of foreground colors as objects), would I be able to apply that process automatically to other, similar images?

Since you mentioned using a 5 pixel grid in this case, I have started to think that maybe I would have to guess the starting grid size for each image separately, perhaps by trial and error.


You are correct to be concerned. Microscopes have a fixed pixel size, so this is usually not a problem for images generated by them. In your case, the images themselves would have to be VERY regular. Unless you can find some kind of defined marker and fix the camera position (exactly) and sample position (exactly), I expect you will need to do some manual work at the start of each image. For large data sets, it is frequently well worth putting in the effort to get images of the quality needed for automation; for small data sets, less so. So it depends on your project.

In addition, you may end up wanting to do some trimming in FIJI or some other image-manipulation software (QuPath is really only for analysis) to make sure that the edges of your image line up with the edges of one of your real-life thread pixels. Otherwise, every tile/square will be off-center. Not sure how easy all of that will be…
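As a rough sketch of that trimming step (Groovy against the ImageJ API again; the offsets and sizes are invented, and you would have to read them off each image yourself):

import ij.IJ

def imp = IJ.openImage('/path/to/textile.jpg')   // hypothetical path
// Shift the crop so the image edge coincides with the corner of a thread "pixel";
// x, y, width and height here are made-up values
imp.setRoi(3, 7, 400, 400)
def cropped = imp.crop()
IJ.save(cropped, '/path/to/textile_cropped.tif') // hypothetical output path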

On the upside, the classifier itself can be saved and run on each image with the same color arrays. Different colors might require a new classifier. It all really depends on the whole project, and if you have images that will need different classification types, it might be best to start using a “project” early.
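For running a saved classifier from a script, something like this should work; runClassifier() is the call I believe QuPath 0.1.x exposes (newer versions use a different classifier API, so check the documentation for your version), and the path is hypothetical:

// Re-create the same tiles and intensity features as in the earlier script first,
// so the detections carry the measurements the classifier was trained on,
// then apply the saved classifier:
runClassifier('/path/to/myTextileClassifier.qpclassifier')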
