Ideas on detecting center of pattern

Hi all,

So we have some really fun images of a gold pattern etched on a coverslip.

The idea is to calculate the center of the pattern. I had thought of using something like a Hough transform to find the point where all the lines converge, but I have not managed to wrap my head around it.

Another approach is to find candidates via a ‘find maxima’ detection after normalization and filtering with a LoG filter, then picking the candidate closest to the center of the image.

Since the pattern can be quite off-center in the image, this last method is not very reliable.

So I was hoping to get some new insights from the ImageJ community gurus to tackle this problem in a different way, perhaps. All suggestions welcome!


The first method that came to my mind was the Hough transform, too, but it would involve a huge computational effort.

Then I tried to get rid of the lines with a Gaussian blur (radius of 8 px), inverted the image and used Find Maxima…. The central spot is selected, but four more maxima are also present.

run("Gaussian Blur...", "sigma=8");
run("Find Maxima...", "noise=15 output=[Point Selection] exclude");

I may have an inelegant solution:

  • Threshold the image so as to keep only the bright, elongated structures.
  • Analyze particles (set a high minimal size to get rid of small spots) with Feret’s diameter.
  • The results table should give you a set of FeretX, FeretY and angles, one for each “particle”.
  • From these you can calculate a set of equations corresponding to the Feret lines (e.g. in Python, Octave or MATLAB).
  • Then, calculate the intersection of all these lines.
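The last two steps can be sketched outside ImageJ. Given each particle’s (FeretX, FeretY) and Feret angle from the Results table, a least-squares fit finds the point closest to all the Feret lines at once (more robust than intersecting pairs). This is a hypothetical snippet, not part of any plugin; `intersect_lines` and its arguments are my own names:

```python
import numpy as np

def intersect_lines(points, angles_deg):
    """Least-squares intersection of lines, each given by a point on the
    line (e.g. FeretX, FeretY) and its angle in degrees (Feret angle).
    Minimizes the sum of squared distances to all lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for (x, y), ang in zip(points, angles_deg):
        t = np.deg2rad(ang)
        d = np.array([np.cos(t), np.sin(t)])  # unit direction of the line
        P = np.eye(2) - np.outer(d, d)        # projector orthogonal to the line
        A += P
        b += P @ [x, y]
    return np.linalg.solve(A, b)              # point closest to all lines

# Three lines that all pass through (5, 10):
center = intersect_lines([(0, 10), (5, 0), (0, 5)], [0, 90, 45])  # ≈ (5, 10)
```

With noisy Feret lines the solution is the point that best agrees with all of them, so a few badly segmented particles do not ruin the estimate.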

Another proposition, maybe more elegant and certainly faster, is to get rid of the stripes using the structure tensor:

  • In OrientationJ, choose a window of 10,
  • then display the energy.
  • Threshold it.
  • You obtain a mask that you can combine with your initial image (Image calculator / AND), then look for the minimum in the combined image.
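For intuition, the “energy” OrientationJ displays is essentially the gradient energy (the trace of the local structure tensor): it is high on the stripes and low in flat regions. A bare-bones NumPy sketch of the idea — `stripe_mask` is a hypothetical name, the Gaussian window smoothing that OrientationJ applies is omitted for brevity, and the quantile threshold stands in for the interactive one:

```python
import numpy as np

def stripe_mask(img, quantile=0.8):
    """Threshold the structure-tensor energy (Ix^2 + Iy^2) to obtain a
    mask of the strongly oriented stripes. `quantile` is an arbitrary
    stand-in for an interactively chosen threshold."""
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2     # trace of the (unsmoothed) structure tensor
    return energy > np.quantile(energy, quantile)
```

The resulting boolean mask plays the role of the thresholded energy image that gets AND-combined with the original before looking for the minimum.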


Here is another first thought on how to find the convergence point of the lines:

Compute a directionality texture feature. I am not sure what exists in Fiji for texture features, but gray level co-occurrence matrices (GLCMs) combined with texture contrast (or correlation feature) should allow you to compute a dominant angle for each pixel.

For each pixel and a specified GLCM offset distance, you find the angle which minimizes texture contrast (or maximizes correlation).

You could then convert each pixel and angle into a line (pixel location is a point and the angle is a slope) and determine where those lines would intersect.
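As a toy illustration of that per-pixel angle search — not a full GLCM (gray levels are not quantized into a co-occurrence matrix, and edges wrap around), and `dominant_angle` with its parameters is an assumption — one can pick the offset angle that minimizes a contrast-like measure:

```python
import numpy as np

def dominant_angle(patch, distance=2, angles_deg=(0, 45, 90, 135)):
    """Return the offset angle with minimal contrast: along a stripe,
    neighbors at that offset are most alike. Contrast here is the mean
    squared difference between the patch and a shifted copy (np.roll
    wraps at the edges, so this is only a sketch)."""
    best, best_c = None, np.inf
    for a in angles_deg:
        dy = int(round(distance * np.sin(np.deg2rad(a))))
        dx = int(round(distance * np.cos(np.deg2rad(a))))
        shifted = np.roll(np.roll(patch, dy, axis=0), dx, axis=1)
        contrast = np.mean((patch.astype(float) - shifted) ** 2)
        if contrast < best_c:
            best, best_c = a, contrast
    return best
```

On a patch whose intensity varies only along x, the vertical offset (90 degrees) gives zero contrast, so the dominant angle follows the stripe direction as described above.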


I have no idea whether this works for similar images or only for this specific one (no image-processing theory involved, just trial and error), but for this one I detected the center using this procedure:

run("Duplicate...", " ");
run("Mexican Hat Filter", "radius=2");//uses this plugin:
run("Gaussian Blur...", "sigma=5");
run("Find Maxima...", "noise=50 output=[Point Selection] light");
roiManager("Select", 0);


Process->Find Edges also works very well:

And then e.g., Process->Make Binary, Process->Binary->Find Maxima (>400, light background):


Not a method in itself… more a kind of manual Hough transform:
Set a (random) point on the boundary of the image, then create a line selection with one end fixed at that point and, keeping that end fixed, take a profile plot at small angular increments. You will get the periodicity of the pattern: the variation along the profile is maximal when the line crosses the pattern perpendicularly and minimal when it is aligned with one of the grooves. That minimum gives you one line. Repeat the procedure from another random point. The intersection of the two lines reveals the center of the pattern.
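A rough NumPy sketch of the profile-variance measurement (nearest-neighbour sampling along the ray; `profile_variance` is a hypothetical helper, not an ImageJ call):

```python
import numpy as np

def profile_variance(img, p0, angle_deg, n=50):
    """Sample the image along a ray starting at p0 = (x, y) at the given
    angle and return the variance of the intensity profile. The variance
    is minimal when the ray runs along a groove."""
    t = np.deg2rad(angle_deg)
    s = np.linspace(0, min(img.shape) - 1, n)   # distances along the ray
    xs = np.clip(np.round(p0[0] + s * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(p0[1] + s * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return np.var(img[ys, xs])
```

Sweeping `angle_deg` in small increments and keeping the angle of minimal variance gives one line through the center; repeating from a second boundary point gives the second line.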


Dear all who have replied:

Thank you so much for your suggestions which I have tested on a few of my images with varying success.

So far the most promising has been given to me off-list by Dr. Gluender:

Perform a summed projection of the pixels along the X and Y axes, then find the position of the maximum on each 1D curve; these two positions are the X and Y coordinates of the center in the image.
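The projection idea is compact enough to sketch in a few lines of NumPy (a hypothetical `projection_center`, without the margin cropping and flat-fielding used in the full code posted below):

```python
import numpy as np

def projection_center(img):
    """Sum the pixels along each axis and take the argmax of the two
    resulting 1D profiles as the (x, y) position of the center."""
    x_profile = img.sum(axis=0)   # one value per column
    y_profile = img.sum(axis=1)   # one value per row
    return int(np.argmax(x_profile)), int(np.argmax(y_profile))
```

A bright central blob dominates both profiles, so the two argmax positions land on its column and row.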

Because the center can be bright-ish or dark-ish depending on the focus, there was a risk that the maximum would be slightly offset from the real center.

A final step with a small Gaussian blur and Find Maxima ensures that the center is well selected.

Again a big thank you for all your inputs!

I’ve added the code below. One day someone might find it useful ^^


	public Point2D computeCenter(ImageProcessor ip) {
		// Flatten the illumination: divide the image by a heavily blurred
		// copy of itself, which removes the low-frequency background
		// (the sigma used here is a guess)
		ip = ip.convertToFloat();
		ImageProcessor ip_blur = ip.duplicate();
		new GaussianBlur().blurGaussian(ip_blur, 25, 25, 0.002);
		ip.copyBits(ip_blur, 0, 0, Blitter.DIVIDE);
		//new ImagePlus("Test", ip).show();

		// Project the pixels onto the X and Y axes,
		// ignoring a 15% margin on each side of the image
		int ignore_px = (int) Math.round(0.15 * ip.getWidth());
		int ignore_py = (int) Math.round(0.15 * ip.getHeight());
		double[] xSum = new double[ip.getWidth() - 2 * ignore_px];
		double[] ySum = new double[ip.getHeight() - 2 * ignore_py];
		for (int i = 0; i < xSum.length; i++) {
			for (int j = 0; j < ySum.length; j++) {
				float v = ip.getPixelValue(i + ignore_px, j + ignore_py);
				xSum[i] += v;
				ySum[j] += v;
			}
		}

		// The maximum of each 1D profile gives the center coordinates
		int[] px = MaximumFinder.findMaxima(xSum, 100, false);
		int[] py = MaximumFinder.findMaxima(ySum, 100, false);
		Point2D chosen = new Point2D.Double(px[0] + ignore_px, py[0] + ignore_py);
		return optimizeCenter(ip, chosen);
	}

	protected Point2D optimizeCenter(ImageProcessor ip, Point2D chosen) {
		// Crop a 150x150 window around the candidate and blur it slightly
		// before looking for the maximum (the small sigma is a guess)
		ip.setRoi(new Roi(chosen.getX() - 75, chosen.getY() - 75, 150, 150));
		ImageProcessor ip_crop = ip.crop();
		new GaussianBlur().blurGaussian(ip_crop, 2, 2, 0.002);
		Polygon fine_tuned = new MaximumFinder().getMaxima(ip_crop, 1000, false);

		// Map back to image coordinates: add the top-left corner of the crop
		chosen.setLocation(fine_tuned.xpoints[0] + chosen.getX() - 75,
				fine_tuned.ypoints[0] + chosen.getY() - 75);
		return chosen;
	}