AVG_10min_SCALED.tif (984.5 KB)
(Image 2 - line drawn through a “defect,” i.e. the object of interest, used to generate a graph of its grayscale values)
(Image 3 - graph of the grayscale values along the red line above)
Hello! I am trying to measure the length of specific features in this 3D printed sample. The goal is to characterize how well my imaging system can measure small defects (the circle I am referencing simulates a defect in a part). I’m doing this by plotting the grayscale values along a line drawn through an object of interest (using ImageJ; see pictures 2 and 3).
The challenge I am running into is: how can I do this programmatically, in a way that removes human bias? For example, I could easily draw a circle in ImageJ that overlays the defect and then calculate its diameter. That seems unacceptable to me, because I would be deciding where the “edge” of the object is based on visual intuition rather than some objective method.
This line of thinking led me to my current conundrum. I have hard data that I can manipulate in Excel (or wherever) corresponding to the grayscale values along a line drawn through the defect (image 3). This seems like a better approach, but I run into the same issue - how do you decide where the “edge” of the defect is, based on this plot? It seems like I must arbitrarily decide that at some point along the line the slope is steep enough to signify an edge. How do I choose that slope threshold, given that there is inherent noise in the image, which produces many spurious slope changes?
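For reference, one widely used convention that sidesteps the arbitrary-slope problem is the half-maximum criterion: define the edge as the position where the profile crosses halfway between the background level and the peak deviation (the FWHM of the feature). Below is a minimal sketch in Python/NumPy under that convention - `fwhm_width`, the moving-average smoothing window, and the median-as-background estimate are all my own illustrative choices, not ImageJ functionality, and would need tuning for real data.

```python
import numpy as np

def fwhm_width(profile, smooth_win=5):
    """Locate feature edges in a 1-D grayscale line profile using the
    half-maximum convention: the 'edge' is where the profile crosses
    halfway between the background level and the peak deviation.
    Returns (left_edge, right_edge, width) in sub-pixel units."""
    x = np.asarray(profile, dtype=float)

    # Light moving-average smoothing to suppress pixel-level noise.
    kernel = np.ones(smooth_win) / smooth_win
    s = np.convolve(x, kernel, mode="same")

    # Work on the absolute deviation from the background so the same
    # code handles bright or dark defects. The median is a crude
    # background estimate, assuming most of the line is background.
    dev = np.abs(s - np.median(s))
    peak = int(np.argmax(dev))
    half = dev[peak] / 2.0

    # Walk outward from the peak to the half-maximum crossings,
    # interpolating linearly between samples for sub-pixel positions.
    left = peak
    while left > 0 and dev[left] > half:
        left -= 1
    l_edge = left + (half - dev[left]) / (dev[left + 1] - dev[left])

    right = peak
    while right < len(dev) - 1 and dev[right] > half:
        right += 1
    r_edge = right - (half - dev[right]) / (dev[right - 1] - dev[right])

    return l_edge, r_edge, r_edge - l_edge
```

The appeal of this criterion is that it is defined by the data itself (background and peak levels) rather than by a hand-picked slope threshold, so anyone re-running it on the exported profile gets the same answer. An alternative with similar objectivity is to locate the extrema of the smoothed derivative (the inflection points of the edge ramps).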
Hopefully that makes some amount of sense. Thanks a lot and I will be looking forward to replying to your responses - I sincerely appreciate your help.