Quantify holes in an image

Good day!

I’m trying to quantify the holes in an image, and despite having used ImageJ on and off for 5 years, I can’t get this seemingly extremely simple procedure to work. My dataset consists of binary images such as the one in the top left frame in the montage below, and what I want to do is to count the number and measure the areas of all the white sections of the image, but the Analyze Particles function includes the black areas in the quantification (see top right frame), which I don’t want it to do. I tried skeletonizing the images (mid left frame), but this resulted in no counts being recognized at all (mid right frame), and if I dilate the skeletonized image (bottom left frame), the counts are recognized, but again it hollows out the skeleton and quantifies that as well (bottom right frame).

I’ve tried inverting the images at every step, and I’ve tried using ROIs to include/exclude segments in the count, but I just can’t seem to get this to work. Any suggestions?

Hi @AllyoBayes,

You want to find bright objects on a black background.
Therefore, make sure that your image has no ‘Inverted LUT’ and that the objects (the holes) are bright.
Also set the option Black background = ON in Process > Binary > Options…

Then use the Particle Analyzer.

Or run the following macro

run("Options...", "iterations=1 count=1 black");
run("Set Measurements...", "area integrated limit redirect=None decimal=3");

run("Analyze Particles...", "exclude include add");

on this test image


Use the Image>Adjust>Threshold tool to highlight the holes, then run the particle analyzer.
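If you ever need the same measurement outside ImageJ, the workflow above can be sketched with SciPy (an illustration of the idea, not the macro itself): threshold so the holes are foreground, label the connected bright regions, and sum the pixels per label to get each hole’s area.

```python
import numpy as np
from scipy import ndimage

# Toy binary image: 1 = hole (bright), 0 = mesh lines (dark).
img = np.array([
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
], dtype=np.uint8)

# Label connected bright regions (default 4-connectivity in 2D),
# then measure the area (pixel count) of each label.
labels, n = ndimage.label(img)
areas = ndimage.sum(img, labels, index=range(1, n + 1))
print(n, list(areas))  # number of holes and their pixel areas
```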


Thank you Peter, that did the trick!


I have a new question, if I may: please refer to the image below. Why does Analyze Particles count the area labeled “5” as one huge particle instead of the complex meshwork of smaller particles that it actually is, and how do I get it to compute the smaller regions within it?

So I spent the holidays trying to figure this out, without getting any wiser. For some reason I can get it to work if I dilate or erode the image, or otherwise change the line widths, but that makes the whole endeavour pointless, since I am trying to quantify the size of the “holes”, i.e. the empty areas between the lines. I’d appreciate any suggestions.

Here’s the original image, in case that helps:
mesh.tif (1.4 MB)

Dear @AllyoBayes,

The reason this happens is that particle connectivity is determined with a “flood fill” tool, which inspects the adjacent neighbors of each pixel (either the 4 directly adjacent neighbors, or all 8 neighbors including the diagonal ones).

Your large area, despite being cut by smaller chunks of lines, is actually one single continuous object in terms of 4- or 8-connected pixel connectivity.
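As a small illustration of that point (a SciPy sketch, not ImageJ code): two pixels that touch only diagonally form two objects under 4-connectivity but a single object under 8-connectivity.

```python
import numpy as np
from scipy import ndimage

# Two foreground pixels that touch only at a corner.
img = np.array([
    [1, 0],
    [0, 1],
], dtype=np.uint8)

four = ndimage.generate_binary_structure(2, 1)   # 4-connectivity
eight = ndimage.generate_binary_structure(2, 2)  # 8-connectivity

_, n4 = ndimage.label(img, structure=four)
_, n8 = ndimage.label(img, structure=eight)
print(n4, n8)  # 2 objects vs. 1 object
```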

To mitigate this, your approach of using erosions and dilations is correct. You should combine them into a “Closing” operation (a dilation followed by an erosion with the same parameters), which preserves particle size but helps close gaps between particles that are close to each other. That way you may end up with the “holes” you wanted.
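Here is a minimal SciPy sketch of why closing helps (an illustration, assuming the mesh lines are the foreground): a one-pixel break in a separator line lets the flood fill merge the holes on either side into one; closing the lines bridges the break, so the holes are counted separately again.

```python
import numpy as np
from scipy import ndimage

# Toy mesh: 1 = line pixels, 0 = holes. The middle separator line
# has a one-pixel break, so the two holes "leak" into one object.
mesh = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],   # broken separator
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
], dtype=bool)

_, n_before = ndimage.label(~mesh)   # holes before closing the lines
closed = ndimage.binary_closing(mesh,
                                structure=np.ones((1, 3), dtype=bool),
                                border_value=1)  # keep the outer frame
_, n_after = ndimage.label(~closed)  # holes after closing the lines
print(n_before, n_after)             # 1 -> 2
```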

This has the unfortunate side effect that smaller holes may be under-represented. This is often an issue in image analysis, as the workflows we create tend to be suitable for particles of a specific size, and may introduce bias when this size changes.

Despite this limitation, applying the same workflow to all your conditions usually makes the bias systematic, so you can still draw conclusions from your data, as long as you keep the effect of the morphological operation in mind when interpreting the results.

Another approach to “quantify holes” is to look at more global metrics, such as the fractal dimension provided by BoneJ.

Hope this helps



Excellent response - this is exactly what I needed to know. As far as the smaller holes go, this is perfectly fine as I am setting a lower limit on hole-size anyway. Thanks a lot!