Measuring object lengths while avoiding human error

AVG_10min_SCALED.tif (984.5 KB)

(Image 2 - a line drawn through a "defect", i.e. an object of interest, used to create a graph of its grayscale values)

(Image 3 - graph of the grayscale value of the red line above)

Hello! I am trying to measure the length of specific features in this 3D printed sample. The goal is to characterize how well my imaging system can measure small defects (the circle that I am referencing is simulating a defect in a part). I’m doing this by taking a grayscale plot of a line (using ImageJ) through an object of interest (see pictures 2 and 3).

The challenge I am running into is this: how can I do this programmatically, in a way that removes human bias? For example, I could easily draw a circle in ImageJ that overlays the defect and then calculate its diameter. This seems unacceptable to me because I would be deciding where the "edge" of the object is based on visual intuition, rather than on some objective method.

This line of thinking led me to my current conundrum. I have hard data that I can manipulate in Excel (or wherever) corresponding to the grayscale values of a line that I draw through the defect (image 3). This seems like a better approach, but I run into the same issue - how do you decide where the "edge" of the defect is, based on this plot? It seems like I must arbitrarily decide that at some point along that line, the slope is steep enough to signify an edge. How do I decide what slope to use, given that there is some inherent "noise" in the image, which produces multiple spurious slope changes?

Hopefully that makes some amount of sense. Thanks a lot and I will be looking forward to replying to your responses - I sincerely appreciate your help.

Use an intensity threshold to define the boundary.

@Austin_ARC Thank you for the reply. Are you referring to something like this?

I had done something along these lines before, but still felt that there was some arbitrary nature to it, since I still technically have to choose a threshold limit. Furthermore, working with binary colors removes my ability to see subsurface defects, which is what is simulated by the section at the top of my original image (the alternating dark and slightly-less-dark areas are subsurface holes).

Maybe a solution would be to use thresholding for surface defects, and another method for subsurface defects? I would appreciate any further input. Thanks a lot.

The problem is that no matter how it's approached, at some point an edge must be defined. You may consider filtering to smooth the boundary, but ultimately an edge definition has to be derived. For an automatic option, you might look at using some percentage above or below the mean of the image, or some other calculated value. Hope this helps!
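As a minimal sketch of that automatic option (assuming the line profile has already been exported as a 1-D NumPy array; the `0.9` fraction is an arbitrary placeholder you would tune for your own images):

```python
import numpy as np

def auto_threshold(profile, fraction=0.9):
    """Derive an edge threshold as a fraction of the profile's mean.

    `profile` is a 1-D array of grayscale values along the measurement
    line; `fraction` sets the cutoff some percentage below the mean,
    as suggested above (0.9 is just a placeholder).
    """
    return fraction * np.mean(profile)

# Synthetic example: bright background (~192) with one dark dip.
profile = np.array([192, 191, 193, 150, 90, 60, 85, 145, 190, 192, 191])
t = auto_threshold(profile)
below = profile < t  # True for samples inside the "defect"
```

The same idea works on the whole image rather than a single line, if your background is uniform enough.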


It’s a bit easier to see with a longer line, but you may be able to use the median or the mode of the values along the line to determine the background level against which you start measuring the edge of an object. With a size threshold, you can also reject small variations in the background that are too small to be a defect.

The image you took is fairly noisy, so I despeckled it a couple of times before taking the line profile.


Exporting the data and taking the mode in Excel gave me a result that looks fairly accurate, judging from the plot and the numbers in the spreadsheet. How you calculate the background to generate your threshold is, as mentioned, still a decision you have to make. And how useful something like this is depends on how noisy your images are and how consistent your background is. It works well here since the background is very regular and the shot noise can be suppressed.

And to tack one last thing on: it also depends somewhat on how the image was taken. Anything imaged slightly out of focus will measure slightly larger than expected if the threshold is set exactly at the background. The physics of how you generate your image might come into play too.


Thank you very much for the response @Research_Associate. So let me write out what you’re saying just to make sure I understand:

  1. Take a grayscale measurement along a line through the feature and export it to Excel.
  2. Find the mode of the values to determine your average background level.
  3. Take the distance value at the last background reading (192, in your example) before the dip.
  4. Take another distance value at the next background (192) reading after the dip.
  5. Subtract the two to find your defect length. (I am trying to figure out a way to automate this process in Python or MATLAB somehow.)
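The five steps above could be automated along these lines in Python - a hedged sketch with synthetic data, assuming a bright background, integer gray values, and a single dark dip:

```python
import numpy as np

def defect_length(distance, gray):
    """Steps 1-5 above: take the mode of the profile as the background
    level, then measure the distance between the last background sample
    before the dip and the first background sample after it."""
    gray = np.asarray(gray)
    distance = np.asarray(distance, dtype=float)
    background = np.bincount(gray.astype(int)).argmax()  # step 2: mode
    dip = int(gray.argmin())          # global minimum, inside the defect
    left = dip
    while left > 0 and gray[left] < background:          # step 3
        left -= 1
    right = dip
    while right < len(gray) - 1 and gray[right] < background:  # step 4
        right += 1
    return distance[right] - distance[left]              # step 5

# Synthetic profile: 0.5-unit sample spacing, background 192, one dip.
distance = np.arange(11) * 0.5
gray = [192, 192, 192, 150, 90, 60, 85, 145, 192, 192, 192]
length = defect_length(distance, gray)
```

In ImageJ/FIJI the same profile could be pulled without the Excel round trip (e.g. via a script), but the arithmetic would be identical.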

I see what you are saying about choosing a threshold value based on the acquisition method. This image was actually taken via neutron radiography, with an imperfectly focused lens. The object is most likely slightly over-represented in the image due to materials interaction and secondary neutron scatter, so maybe I can play with the threshold value and find one that matches my measurement to a known dimension.

Thanks again for your input, please let me know if I misinterpreted anything.


Well, I suspect you can find the mode or median or something within FIJI, especially if you want to script the whole thing. I only sent it out to Excel because I knew it would be quick, and doing it manually was fine for one sample 🙂
If you are looking for defects, you might be able to use the mode of your whole image, so you only have to calculate it once. That is, if you can assume defects aren’t the norm for any of your samples.

I would be careful about how you do steps 3 and 4, but basically, yes. I'll mention that if you expect an exact 192, you might be disappointed by the results. You'll want the last point at which the line crosses 192, and then the next time it crosses 192. If you will always have a global minimum, you might use the global minimum and search in either direction for the first value of 192 or greater, then take the "previous" value.
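That search could be sketched as follows. The linear interpolation between the two samples straddling the threshold is my own addition for a sub-sample edge estimate, not part of the suggestion above, and the sketch assumes a single dip with samples on both sides that reach the threshold:

```python
import numpy as np

def edge_positions(distance, gray, level):
    """From the global minimum, walk in each direction to the first
    sample at or above `level`, then linearly interpolate between that
    sample and its neighbor for a sub-sample crossing position."""
    gray = np.asarray(gray, dtype=float)
    distance = np.asarray(distance, dtype=float)
    dip = int(gray.argmin())

    i = dip
    while i > 0 and gray[i] < level:          # walk left to >= level
        i -= 1
    f = (level - gray[i]) / (gray[i + 1] - gray[i])
    left = distance[i] + f * (distance[i + 1] - distance[i])

    j = dip
    while j < len(gray) - 1 and gray[j] < level:   # walk right
        j += 1
    f = (level - gray[j - 1]) / (gray[j] - gray[j - 1])
    right = distance[j - 1] + f * (distance[j] - distance[j - 1])
    return left, right

# Example: threshold halfway between background (192) and the minimum.
distance = np.arange(7.0)
gray = [192, 192, 96, 0, 96, 192, 192]
left, right = edge_positions(distance, gray, level=150)
```

Taking the "previous" value, as described above, corresponds to dropping the interpolation and returning `distance[i]` and `distance[j - 1]` directly.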