Ideas for edge detection of partial, noisy, low-contrast ellipse?

Hi all,

I have some images that contain some hard-to-see edges that I am trying to detect. The edge is a semi-elliptical interface that shrinks rapidly across the image set, as seen below (I will also attach the .tif files at the end):

example_0222

The images were made by dividing each radiograph in the original data set by the preceding radiograph in order to highlight differences, of which the moving interface is one. This is what the image looks like before that division:

example_0222_unprocessed

Ideally, I would have a Python program that automatically detects the edge, so that I can extract the coordinates of the curve for velocity calculations between frames.

I am using scikit-image in Python, and so far I have tried Canny edge detection and a variety of filters (Sobel, Roberts, Scharr, Prewitt) with no real success, although with Canny I can see the edge among many other detected ‘edges’ when I use a sigma of 3, as seen below:

example_0222_sobel_sig3

Trying different values of the low- and high-threshold for the Canny edge detection has not helped.
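Since the hysteresis thresholds interact with sigma, it can help to script the sweep. A minimal sketch in scikit-image (the image here is a synthetic stand-in for example_0222.tif, which would normally be loaded with skimage.io.imread):

```python
import numpy as np
from skimage import feature

# Synthetic stand-in: noisy background with one faint bright region,
# roughly mimicking a low-contrast interface.
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.05, (100, 100))
img[40:60, 30:70] += 0.2

# sigma=3 as in the post; low_threshold / high_threshold are the
# hysteresis thresholds on the gradient magnitude being tuned.
edges = feature.canny(img, sigma=3, low_threshold=0.005, high_threshold=0.02)
print(edges.sum(), "edge pixels found")
```

Lowering both thresholds keeps weaker edges at the cost of more noise; raising sigma suppresses noise but blurs the true edge position.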

Does anybody have any ideas for other techniques I can explore to try to achieve better results?

Thanks!

example_0222.tif (384.3 KB)
example_0222_unprocessed.tif (384.2 KB)
example_0221_unprocessed.tif (384.2 KB)

Hi
@cgusbecker
20191211_Circle-1

I do not know Python, but I use ImageJ macros. If this result suits you, then maybe you can translate the macro into Python.

// Work on duplicates so the original stays untouched
img = getImageID();
run("Duplicate...", "title=1");
run("Duplicate...", "title=2");

// Smooth, sharpen, and invert the working copy ("2")
run("Gaussian Blur...", "sigma=1");
run("Unsharp Mask...", "radius=25 mask=0.60");
run("Invert");

// Auto-threshold (Minimum method), binarize, and thicken the edge
setAutoThreshold("Minimum dark");
//run("Threshold...");
run("Convert to Mask");
run("Dilate");
run("Analyze Particles...", "display add");

// Transfer the detected outline back onto the clean duplicate ("1")
roiManager("Select", 0);
selectWindow("1");
run("Restore Selection");

roiManager("Set Fill Color", "blue");

If successful, please send me the Python version.
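Not an official translation, but a rough scikit-image sketch of the macro's pipeline (Gaussian blur, unsharp mask, invert, Minimum auto-threshold, dilate, particle analysis) might look like this; the image is a synthetic stand-in, and the threshold direction may need flipping for the real data:

```python
import numpy as np
from skimage import filters, measure, morphology, util

# Synthetic stand-in: bright background with a dark disc playing the
# role of the interface (real input: skimage.io.imread("example_0222.tif")).
rng = np.random.default_rng(1)
img = rng.normal(0.8, 0.03, (128, 128))
rr, cc = np.ogrid[:128, :128]
img[(rr - 64) ** 2 + (cc - 64) ** 2 < 30 ** 2] = 0.2

smoothed = filters.gaussian(img, sigma=1)                      # Gaussian Blur..., sigma=1
sharp = filters.unsharp_mask(smoothed, radius=25, amount=0.6)  # Unsharp Mask...
inverted = util.invert(sharp)                                  # Invert (float image -> 1 - image)

thresh = filters.threshold_minimum(inverted)                   # setAutoThreshold("Minimum dark")
mask = inverted > thresh                                       # Convert to Mask
mask = morphology.binary_dilation(mask)                        # Dilate

labels = measure.label(mask)                                   # Analyze Particles...
props = measure.regionprops(labels)
print(len(props), "particle(s) found")
```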


If you are allowed to manually set the AOI, then maybe an analysis like the one outlined in this Jupyter notebook might work for you?

edge_detection

If you have a Google account you should be able to run it yourself using colab.

image3
image1
image2

For the best accuracy you’d probably want to do something like fit a 1D Gaussian across the detected edge.
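As a sketch of that idea (synthetic step profile; in practice the profile would be sampled perpendicular to the detected edge): differentiate the profile and fit a 1D Gaussian to the gradient peak to get a sub-pixel edge position.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma**2)) + offset

# Synthetic intensity profile across the edge: a smooth step whose
# derivative peaks at the true edge position.
x = np.arange(50, dtype=float)
true_edge = 23.4
profile = 1.0 / (1.0 + np.exp(-(x - true_edge) / 2.0))

grad = np.gradient(profile)
p0 = [grad.max(), x[np.argmax(grad)], 3.0, 0.0]  # initial guess near the peak
popt, _ = curve_fit(gaussian, x, grad, p0=p0)
edge_position = popt[1]  # sub-pixel estimate of the edge location
```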

-Hazen


@Hazen_Babcock Thank you! This is extremely helpful.

Can you provide any references detailing some of the techniques you used? I’m specifically interested in the normalization technique (subtracting the mean and dividing by the standard deviation). I have seen it used before, but I am not sure when it is appropriate. I am also curious why you take the difference between the smoothed images, subtract the 5th-percentile value, set all values below 0 to zero, divide by the 95th-percentile value, set all values above 1 to 1, and finally subtract all of the values from 1.

Thanks again!

@cgusbecker No problem.

The thought behind the normalization is to correct for any illumination differences between the two images, such as the lamp power fluctuating. In this case it probably isn’t actually necessary, as the two images are very similar in intensity. When is it appropriate? I think it should be okay as long as the two images have similar intensities, say within a factor of 2. If there are really large intensity differences (10x?), then this normalization may change the relative SNR of the two images, which could be problematic depending on what you are trying to do.
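One way to see why this corrects illumination: if one frame is just a scaled, offset copy of the other (a pure illumination change), z-score normalization (the `zscore` helper below is illustrative, not from the notebook) makes the two frames identical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two frames of the same scene, the second with a brighter lamp:
frame_a = rng.normal(100.0, 10.0, (64, 64))
frame_b = frame_a * 1.5 + 20.0  # same structure, different illumination

def zscore(img):
    """Subtract the mean and divide by the standard deviation."""
    return (img - img.mean()) / img.std()

a, b = zscore(frame_a), zscore(frame_b)
print(np.allclose(a, b))  # the illumination difference is gone
```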

The 5/95 filter is supposed to remove any outlying pixel intensity values. Subtracting all the values from 1 at the end was just to invert the image; this was done because skimage.morphology.label considers values of 0 to be the background.

Sorry, I don’t know any good references for this. I’m mostly just following what I’ve seen others do on similar problems. Unfortunately you’ll probably also find that this is tuned for these two images and it will take some more work to get it to correctly analyze your whole image stack.


@Hazen_Babcock

That’s okay, an explanation works great too! One last question: why is the 5th-percentile value subtracted from the image, while the image is divided by the 95th-percentile value?

@cgusbecker

I was normalizing the image range to be 0.0 - 1.0: subtract the 5th-percentile value to set the zero point, then divide by the 95th-percentile value so the maximum is 1.0 (and also clip any values that fall outside the 0.0 - 1.0 range).
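Taken together, those steps can be sketched as follows (a synthetic difference image stands in for the smoothed data):

```python
import numpy as np

rng = np.random.default_rng(3)
diff = rng.normal(0.5, 0.1, (64, 64))  # stand-in for the smoothed difference image

shifted = diff - np.percentile(diff, 5)        # 5th percentile -> zero point
shifted[shifted < 0] = 0.0                     # clip low outliers
scaled = shifted / np.percentile(shifted, 95)  # 95th percentile -> 1.0
scaled[scaled > 1] = 1.0                       # clip high outliers
result = 1.0 - scaled                          # invert so features are nonzero for labeling
```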