Elliptical Hough transform doesn't work on my own images

Hi everyone,

I am currently trying to use the elliptical Hough transform, with the end goal of using it to outline droplets for my final university project. I am using code based off this example:

However, it only seems to work with some of the images in the skimage data library (not all), and not at all with my own basic image, such as the one below:

[image: droplet]

Is anybody else having issues using their own images, and if so, have you found a solution?

Thanks for the help and happy new year!
Tom

Hi Tom, welcome to the forum! You should take the magnitude of the gradient of your image before computing its Hough transform (for example by first transforming your image with a Canny filter, as in the example you linked to). Did you use a gradient filter before computing the Hough transform? If you already did, could you please share a reproducible snippet of code so that we can see where the problem is?
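
For reference, here is a minimal sketch of the pipeline I have in mind, adapted from the gallery example. The file name 'droplet.png' and all of the parameter values are only placeholders that you would need to tune for your image:

import matplotlib.pyplot as plt
from skimage import io, color, img_as_ubyte
from skimage.feature import canny
from skimage.transform import hough_ellipse
from skimage.draw import ellipse_perimeter

# Load the image as grayscale (file name is a placeholder)
image = img_as_ubyte(io.imread('droplet.png', as_gray=True))

# Edge map first: hough_ellipse expects a binary edge image,
# not the raw grayscale picture
edges = canny(image, sigma=2.0)

# Parameter values are only illustrative and need tuning
result = hough_ellipse(edges, accuracy=20, threshold=250,
                       min_size=100, max_size=120)
result.sort(order='accumulator')

# Take the best candidate: (accumulator, yc, xc, a, b, orientation)
best = list(result[-1])
yc, xc, a, b = (int(round(x)) for x in best[1:5])
orientation = best[5]

# Draw the fitted ellipse in red on an RGB copy of the image
cy, cx = ellipse_perimeter(yc, xc, a, b, orientation, shape=image.shape)
overlay = color.gray2rgb(image)
overlay[cy, cx] = (250, 0, 0)

plt.imshow(overlay)
plt.show()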

Hi Emmanuelle,

Thank you so much for the response!
Here is the code that I used, and the output result for the image above. It seems to work on the straight edge but not on the elliptical one.

[image: output result]


Happy new year
Tom

hough_ellipse has many parameters and is therefore a bit hard to use, but in your case fitting with a circle instead of an ellipse gives good results:

import numpy as np
import plotly.express as px
from skimage import io, color, draw, feature, transform, img_as_ubyte

# Load the droplet image as grayscale
img = img_as_ubyte(io.imread('droplet.png', as_gray=True))
px.imshow(img)

# Edge detection; the thresholds were tuned for this particular image
edges = feature.canny(img, sigma=2, low_threshold=0.65, high_threshold=0.8)
px.imshow(edges)

# Circular Hough transform over a range of candidate radii (in pixels)
radii = np.arange(100, 150, 2)
result = transform.hough_circle(edges, radii)
accums, cx, cy, radii = transform.hough_circle_peaks(result, radii,
                                                     total_num_peaks=2)

# Draw the two best circles in red on an RGB copy of the image
image = color.gray2rgb(img)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = draw.circle_perimeter(center_y, center_x, radius,
                                         shape=image.shape)
    image[circy, circx] = (220, 20, 20)
px.imshow(image)
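
Note that the radii range (here 100–150 pixels, in steps of 2) and the Canny thresholds are specific to this image; for other droplet photos you would want to adjust the range so that it brackets the expected droplet radius in pixels.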

As for ellipses, it is true that I could not make it work with hough_ellipse and your image… I will try to investigate a bit more.


Wow, thanks so much for the help! Will try using this with some other images I’ve got instead of hough_ellipse.
If you have any further success using the ellipse, that would be amazing!
Thanks again

Hi!

I too have difficulty getting the Hough transform for ellipses running when using my own image. However, the sample code in the link provided by the previous poster runs fine when I use the provided coins image.

The Canny edge image of my own picture looks OK to me:

There are 3 things I wonder about:

  1. I am uncertain about the line that opens the coins image:
    image = img_as_ubyte(data.coins()[160:230, 70:270])
    What do the numbers in the brackets mean? I couldn't find an explanation so far, so maybe something is wrong with the format of my image, preventing proper continuation.

  2. In the tutorial, there is a short explanation of the parameters of the Hough transformation:

# The accuracy corresponds to the bin size of a major axis.
# The value is chosen in order to get a single high accumulator.
# The threshold eliminates low accumulators

Could you please elaborate on this? It is difficult to estimate good parameters for my own image, if their meaning is not clear.

  3. When I run my code (see below), processing doesn't seem to stop. I let it run for an hour or so, and it keeps using some CPU. When I abort (Ctrl+C), I get the following output:
(kirsch) C:\Users\Jost\Documents\RnD\rePho\filter_kirsch>python "new 1.py"
new 1.py:31: FutureWarning: The behavior of rgb2gray will change in scikit-image 0.19. Currently, rgb2gray allows 2D grayscale image to be passed as inputs and leaves them unmodified as outputs. Starting from version 0.19, 2D arrays will be treated as 1D images with 3 channels.
  image_gray = color.rgb2gray(image_rgb)
Traceback (most recent call last):
  File "new 1.py", line 44, in <module>
    result = hough_ellipse(edges, accuracy=10, threshold=50, min_size=10, max_size=20)
  File "C:\Users\Jost\anaconda3\envs\kirsch\lib\site-packages\skimage\transform\hough_transform.py", line 165, in hough_ellipse
    min_size=min_size, max_size=max_size)
  File "skimage\transform\_hough_transform.pyx", line 216, in skimage.transform._hough_transform._hough_ellipse
  File "<__array_function__ internals>", line 2, in amax
KeyboardInterrupt

Does that help to track down the issue?

You can find my code below. Thank you very much for helping!

import glob
import os

import cv2
import matplotlib.pyplot as plt
import numpy as np

from skimage import color, img_as_ubyte
from skimage.feature import canny
from skimage.transform import hough_ellipse
from skimage.draw import ellipse_perimeter


# Load picture, convert to grayscale and detect edges

path = glob.glob("C:/Users/Jost/Documents/RnD/rePho/filter_kirsch/temp/*.jpg")
cv2_img = []
for img in path:      
    fg = cv2.imread(img)
    fg_rgb = cv2.cvtColor(fg, cv2.COLOR_BGR2RGB)
    gray = cv2.cvtColor(fg_rgb, cv2.COLOR_RGB2GRAY)   
    image_rgb = cv2.normalize(src=gray, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)    
    head_tail = os.path.split(img) 
    
    #image_rgb = img_n[0:220, 160:420]
    #info = np.iinfo(image_rgb.dtype) # Get the information of the incoming image type
    
    image_gray = color.rgb2gray(image_rgb)
    
    edges = canny(image_gray, sigma=2.0, low_threshold=10, high_threshold=20)
    edges = img_as_ubyte(edges)
    
    edges = cv2.resize(edges, (982,1472))
    cv2.imshow('Image', edges)    
    cv2.waitKey(0)                   # waits until a key is pressed
    cv2.destroyAllWindows()          # destroys the window showing image
    
    # Perform a Hough Transform
    # The accuracy corresponds to the bin size of a major axis.
    # The value is chosen in order to get a single high accumulator.
    # The threshold eliminates low accumulators
    result = hough_ellipse(edges, accuracy=10, threshold=50, min_size=10, max_size=20)
    result.sort(order='accumulator')

    # Estimated parameters for the ellipse
    best = list(result[-1])
    yc, xc, a, b = [int(round(x)) for x in best[1:5]]
    orientation = best[5]

    # Draw the ellipse on the original image
    cy, cx = ellipse_perimeter(yc, xc, a, b, orientation)
    image_rgb[cy, cx] = (0, 0, 255)
    # Draw the edge (white) and the resulting ellipse (red)
    edges = color.gray2rgb(img_as_ubyte(edges))
    edges[cy, cx] = (250, 0, 0)

    fig2, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(8, 4), sharex=True, sharey=True)

    ax1.set_title('Original picture')
    ax1.imshow(image_rgb)

    ax2.set_title('Edge (white) and result (red)')
    ax2.imshow(edges)

    plt.show()
    cv2_img.append(fg)