Retrieve outer contour of a discontinuous shape



I am trying to get the contour of a segmented object. The problem is that the segmented object is discontinuous, so I get several selections and several contours. Unfortunately, “Fit Spline” or “Fit Convex Hull” does not work with several selections.
The step before the thresholding that leads to the image below is “Find edges”. I could actually try some snake contour detection, but the results are not so good. Now that I have a binary image, it does not work, since I guess there is no gray-level gradient to make the snake converge.

Something that would work would be to retrieve the outer points of each contour and merge them into a larger contour.
Should I convert each small contour into a point selection and iterate over the points to keep the outer ones, or is there a more efficient way?


EDIT: skeletonize is probably the next step so I have nice lines, but I still need to merge the outer ones into a contour; again, fitting a spline would be nice.


Hi Thomas,
I think you will have a hard time tuning a spline to fit well. If you tune it for the concave areas, it will bow inwards at the gaps. If you set it to handle the long gaps, it will span the concave areas.
You might consider uploading the original image to get suggestions for decreasing the gaps.

You can get a convex hull with the attached code.
Draw a box around the pixels you want to include.
Add the box to the ROI Manager by clicking Edit > Selection > Add to Manager.
Run the code.
On the upside, the code handles multiple ROIs; just add more ROIs to the manager.
The downside is that I just hacked it out of a different routine and it is pretty ugly.

Note: I’m sure there are guidelines on how to post code but I couldn’t find them this morning…

from ij import IJ, WindowManager
from ij.plugin.frame import RoiManager

# Orientation of ordered triplet of points
# Result is 0 for colinear
# Result is 1 for clockwise
# Result is 2 for counterclockwise
def orientation( p, q, r ):
	result = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
	if result > 0:
		result = 1
	elif result < 0:
		result = 2

	return result

# Convex_Hull returns a list of points that define the convex hull
# of the list of pixels that is passed into it.
# It treats pixels with the value of zero as background and all others as object
# It is crude and designed for 8 bit grey scale or binary images
# pixel[x,y,value]
# point[x,y]
# TODO: Make a second pass through the hull points to remove intermediate colinear points.
# It currently ignores some colinear points and not others; it's a function of the point
# order in the input array.

def convex_Hull(pixels):
	points = []

	# Collect all the object points from pixels
	objectPoints = []
	for pixel in pixels:
		if pixel[2] > 0:
			objectPoints.append([pixel[0], pixel[1]])

	# Find the upper point on the left edge
	# It will be the starting point for the march
	startingIndex = 0
	startingPoint = objectPoints[startingIndex]
	for i in range(1,len(objectPoints)):
		# if point is more left of the current startingPoint use it
		if objectPoints[i][0] < startingPoint[0]:
			startingPoint = objectPoints[i]
			startingIndex = i
		# if point is directly above the current startingPoint use it
		elif objectPoints[i][0] == startingPoint[0] and objectPoints[i][1] < startingPoint[1]:
			startingPoint = objectPoints[i]
			startingIndex = i
	# Run the march
	p = startingIndex
	foundEnd = 0
	n = len(objectPoints)
	while foundEnd == 0:
		# Record the current hull point, then find the most counterclockwise
		# candidate q for the next step of the march
		points.append(objectPoints[p])
		q = (p+1) % n
		for i in range(n):
			if orientation(objectPoints[p],objectPoints[i],objectPoints[q]) == 2:
				q = i

		p = q
		if p == startingIndex:
			foundEnd = 1

	return points

# This code assumes that you have selected the areas you want
# a hull drawn around, and that those selections have been added to the ROI Manager.

rm = RoiManager.getInstance()
if not rm:
	rm = RoiManager()

# Work on a duplicate so the original image is untouched
IJ.run("Duplicate...", "title=Working")
impWorking = WindowManager.getImage('Working')
ipWorking = impWorking.getProcessor()

IJ.runMacro("setForegroundColor(32, 32, 32)")

blobCount = rm.getCount()

for i in range(blobCount):
	# Get the points inside the ROI and convert them to pixels by adding the grey value
	edgePixels = []
	roi = rm.getRoi(i)
	points = roi.getContainedPoints()
	for point in points:
		brightness = ipWorking.getPixel(point.x,point.y)
		# Only collect pixels that are part of the edge detector output
		if brightness > 0:
			edgePixels.append([point.x, point.y, brightness])
	if len(edgePixels) > 3:
		hullPoints = convex_Hull(edgePixels)
		# Build a makePolygon() macro call from the hull points, then fill it
		polyString = "makePolygon("
		for k in range(len(hullPoints)-1):
			polyString = polyString + str(hullPoints[k][0]) + "," + str(hullPoints[k][1]) + ","
		k = len(hullPoints)-1
		polyString = polyString + str(hullPoints[k][0]) + "," + str(hullPoints[k][1]) + ")"
		#print polyString
		IJ.runMacro(polyString)
		IJ.run("Fill", "slice")
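The hull routine above is a gift wrapping (Jarvis march). As a stand-alone sanity check, here is a minimal sketch of the same idea in plain Python, runnable outside ImageJ; the function names and the point list are my own, not part of the script above:

```python
def orientation(p, q, r):
    # Sign of the cross product of (q - p) and (r - q):
    # 0 = colinear, 1 = clockwise, 2 = counterclockwise
    v = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
    return 0 if v == 0 else (1 if v > 0 else 2)

def convex_hull(points):
    # Gift wrapping: start at the leftmost (then topmost) point and keep
    # turning to the most counterclockwise candidate until we come back.
    n = len(points)
    start = min(range(n), key=lambda i: (points[i][0], points[i][1]))
    hull, p = [], start
    while True:
        hull.append(points[p])
        q = (p + 1) % n
        for i in range(n):
            if orientation(points[p], points[i], points[q]) == 2:
                q = i
        p = q
        if p == start:
            break
    return hull

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(convex_hull(pts))  # → [(0, 0), (4, 0), (4, 4), (0, 4)]
```

The interior points (2, 2) and (1, 3) are dropped; only the four corners remain.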


You could give dilate and erode a try.


Thanks for your code example! It’s perfectly fine to use the code formatting button (</>) of the embedded post editor to format the code you post.
Note that I edited your post (using code fences ```, a feature of Markdown) simply to add automatic syntax highlighting :slight_smile:


Thanks @GregR for the code, the resulting convex hull looks good to me, at least for this application!
So it is indeed based on an iteration over individual pixels. I will play with the code to see if I can use the start and end points of the contours that I have generated (discarding some contours on the way).

Here is the initial image

It is difficult to isolate the specimen from the background using thresholding only. My current workaround is to run a light Gaussian smooth, then “Find edges”, thresholding (the picture in the initial post), and skeletonize to recover a set of contours.

@yempski, I tried morphological operators but in this case I would need to dilate so much that I would really lose the initial shape…


For the example image, this code does the job:

run("Duplicate...", " ");
run("Subtract Background...", "rolling=50 create");
doWand(206, 82, 110.0, "Legacy");
roiManager("Select", 0);


While the built-in Edit > Selection > Convex Hull command returns the convex hull of a single selection (and requires that selection to be a point or polygon selection), there’s a 3D Convex Hull command (Plugins > 3D > 3D Convex Hull) in the 3D ImageJ Suite by @ThomasBoudier that works very well on 2D images as well. It takes a label image as input, so if you supply a binary mask, it will treat all positive (255) pixels as a single object with id 255.


The close operation does not dilate the initial shape and should be suitable as long as your object is well isolated from any other objects around it. For example, using a circle shape with radius 50:
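For readers unfamiliar with the operation: a close is a dilation followed by an erosion, so small gaps get bridged while the overall extent is preserved. Here is a minimal pure-Python sketch on a made-up toy grid, using a 3×3 square structuring element rather than the circular one mentioned above:

```python
def dilate(grid):
    # A pixel becomes 1 if any pixel of its 3x3 neighbourhood is 1.
    h, w = len(grid), len(grid[0])
    return [[int(any(grid[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def erode(grid):
    # A pixel stays 1 only if its whole 3x3 neighbourhood is inside the
    # image and entirely 1 (pixels outside the image count as background).
    h, w = len(grid), len(grid[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
             for x in range(w)] for y in range(h)]

def close(grid):
    # Close = dilate then erode: bridges small gaps without net growth.
    return erode(dilate(grid))

# Two blobs separated by a one-pixel gap (column 3):
grid = [[0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 0, 0, 0, 0, 0, 0]]
for row in close(grid):
    print(row)  # the gap is filled and the two blobs merge into one
```

With a larger structuring element (like the radius-50 circle above), correspondingly larger gaps can be bridged.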


How about some segmentation using machine learning, e.g. using Trainable Weka Segmentation?

Resulting probability map:



While this might be absolutely suitable for the given task, note that the selection generated by this code doesn’t enclose all objects of the input image.


Nice trick to use the “generate background” option!

This is good to know!

The two previous methods work well on the example image; however, as soon as some blob remains from the background, the hull is deformed, of course. I can remove the blobs by morphological operations, but the goal is to apply the solution to a large number of images, so the machine learning alternative would probably be the most robust, especially since it makes detecting and cropping the object even easier. Maybe this could be done more easily with Ilastik? I will try both :smiley:

This is nice too! Was it from the gray-level picture? If so, can you recall the different steps? Thanks!


Hi Thomas,
It looks like you might want to revisit your illumination if possible. You probably don’t want darkfield, but a low-angle ring light might make the object pop.




I agree that brightfield is not the best option to distinguish the object from the background. The thing is, we are using a custom screening microscope, and for some reason we don’t have much flexibility for brightfield illumination, so no darkfield or DIC is currently possible.

It is also part of the project to investigate ways to improve the signal-to-noise ratio of such images. I haven’t applied any background correction to the images posted above, yet a rolling ball or modelling the background with a second-order polynomial usually helps too.
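To illustrate the polynomial-background idea, here is a toy sketch on a synthetic 1-D intensity profile (plain Python, all numbers made up): fit a second-order polynomial to background samples away from the object, then subtract it everywhere so the object stands out on a flat baseline.

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = a*x^2 + b*x + c via the normal equations.
    s = [sum(x ** k for x in xs) for k in range(5)]              # sums of x^0..x^4
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    for col in range(3):                                         # Gauss-Jordan elimination
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]                 # [a, b, c]

# Synthetic profile: quadratic background plus a bright object at x = 14..18
profile = [0.2 * x * x + 3 * x + 40 + (100 if 14 <= x <= 18 else 0)
           for x in range(32)]

# Fit the background on samples away from the object, subtract it everywhere
xs = [x for x in range(32) if x < 12 or x > 20]
a, b, c = fit_quadratic(xs, [profile[x] for x in xs])
corrected = [profile[x] - (a * x * x + b * x + c) for x in range(32)]
print(round(corrected[16]), round(corrected[5]))  # object ~100, background ~0
```

In 2-D the same idea applies with a polynomial in x and y; ImageJ's rolling ball (Process > Subtract Background) is a non-parametric alternative to the explicit fit.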