Ridge Detection Feedback




I have been working on an Ops implementation of the Ridge Detection algorithm, adapted from the paper here and inspired by @twagner’s original plugin. The algorithm has not yet been released in ImageJ2, but we would like to showcase its current development and get feedback on the current and potential future features, especially from those for whom this Op might be relevant (@hadim?).

As of now, the Ridge Detection Op takes in an image composed of vessel or line-like structures on a darker background (left) and returns a list of Polylines. The Junction Detection Op then takes that list of Polylines and determines the junctions between them. Below is a test image created in Fiji and a composite image of the results (test image in blue, polyline results in pink, junctions represented as 3x3 pixel boxes in white).

As of now, those who would like to try out the filter can do so as follows, using a terminal or Git Bash:

  1. Clone branch ridge-detection from imagej/imagej-ops.
  2. Check out the branch (ridge-detection).
  3. Navigate to the imagej-ops repository on your machine and enter into the terminal: mvn -Dimagej.app.directory=PATH_TO_Fiji.app/. This one might take a few minutes because of the size of Ops.
  4. Clone branch ridge-detection from imglib/imglib2-roi.
  5. Check out the branch (ridge-detection).
  6. Navigate to the imglib2-roi repository on your machine and enter into the terminal: mvn -Dimagej.app.directory=PATH_TO_Fiji.app/ (the same command as in step 3). This one will take much less time than step 3 did.
  7. Launch Fiji and use the Groovy script posted below. Make sure to use a grayscale input image and to open the image in ImageJ before running the Groovy script.

Here is the script to run the Ridge Detection Algorithm:

#@OpService op
#@UIService ui
#@Img input
#@Double(label="Line width:") width
//refers to the width of the structure.
#@Double(label="Junction Threshold:") junctionThreshold
//refers to the pixel distance acceptable between two polyline
//vertices to be considered a junction.
#@Integer(label="Minimum Ridge Length:") ridgeLengthMin
//refers to the minimum size a ridge can be for it to be
//included in the Ridge Detection output.
#@Double(label="Lower Threshold:") lowThreshold
//refers to the minimum gradient value to be accepted when
//adding to polylines.
#@Double(label="Higher Threshold:") highThreshold
//refers to the minimum gradient value to be accepted when
//creating a new polyline.
#@Boolean(label="Find Junctions:") findJunctions
//determines whether or not the script will also find the junctions

import java.util.List
import net.imglib2.img.Img
import net.imglib2.img.array.ArrayImgs
import net.imglib2.RandomAccess
import net.imglib2.RealPoint
import net.imglib2.roi.geom.real.DefaultPolyline
import net.imglib2.type.numeric.real.DoubleType
import net.imagej.ops.segment.ridgeDetection.RidgeDetection
import net.imagej.ops.segment.ridgeDetection.JunctionDetection

long[] dims = new long[input.numDimensions()]
input.dimensions(dims) // size the outputs to match the input
Img<DoubleType> output = ArrayImgs.doubles(dims)
Img<DoubleType> junctionOutput = ArrayImgs.doubles(dims)
outputRA = output.randomAccess()
junctionOutputRA = junctionOutput.randomAccess()

outputLines = (List<DefaultPolyline>) op.run(RidgeDetection.class,
	input, width, lowThreshold, highThreshold, ridgeLengthMin)

// Draw each polyline vertex into the output image.
long[] pos = new long[2]
for (DefaultPolyline d in outputLines) {
	for (int i = 0; i < d.numVertices(); i++) {
		def p = d.vertex(i)
		pos[0] = (long) Math.round(p.getDoublePosition(0))
		pos[1] = (long) Math.round(p.getDoublePosition(1))
		outputRA.setPosition(pos)
		outputRA.get().setReal(255)
	}
}

ui.show("output", output)

if (findJunctions) {
	results = (List<RealPoint>) op.run(JunctionDetection.class,
		outputLines, junctionThreshold)
	// Mark each junction with a 3x3 pixel box.
	for (int i = 0; i < results.size(); i++) {
		RealPoint p = results.get(i)
		for (int x = -1; x <= 1; x++) {
			for (int y = -1; y <= 1; y++) {
				pos[0] = (long) Math.round(p.getDoublePosition(0) + x)
				pos[1] = (long) Math.round(p.getDoublePosition(1) + y)
				junctionOutputRA.setPosition(pos)
				junctionOutputRA.get().setReal(255)
			}
		}
	}
	ui.show("junctions", junctionOutput)
}


There are a few more features that I have in mind for furthering the package:

  • Add support for 3D images, something not described in Steger’s paper but potentially a useful tool.
  • Add an Edge Detection Op (similar to Ridge Detection but detects the Edges of the structures rather than the centers), as described in Steger’s paper and implemented in Thorsten’s plugin.
  • Add an op to break up the Polylines around the Junction points. This is the way that Steger handles junctions in his paper.
  • Add an op to link together Polylines from the same ridge that are broken apart due to an intersection, noise, or something else.

Before I add these features to the package, I would like to assess their community impact. Any feedback and suggestions on the current and potential features are welcome!

Vagaries of the Ridge Detection algorithm

This looks like an awesome job, @gselzer. I am indeed very excited to test it, since I currently use the original IJ1 plugin in the FilamentDetector plugin I am developing: https://github.com/hadim/FilamentDetector

Adding the IJ2 ridge detection ops is definitely something I want to do.

I will keep you posted here soon.


I am running into a compilation issue using Eclipse. What I did:

  • Clone the ridge-detection branch of imglib2-roi and build it. No issue.
  • Clone the ridge-detection branch of imagej-ops and build it. Here is the error:
[INFO] -------------------------------------------------------------
[ERROR] /home/hadim/ij/imagej/imagej-ops/src/main/java/net/imagej/ops/segment/ridgeDetection/JunctionDetection.java:[118,80] incompatible types: net.imglib2.roi.util.RealLocalizableRealPositionable cannot be converted to net.imglib2.RealPoint
[ERROR] /home/hadim/ij/imagej/imagej-ops/src/main/java/net/imagej/ops/segment/ridgeDetection/JunctionDetection.java:[119,80] incompatible types: net.imglib2.roi.util.RealLocalizableRealPositionable cannot be converted to net.imglib2.RealPoint
[ERROR] /home/hadim/ij/imagej/imagej-ops/src/main/java/net/imagej/ops/segment/ridgeDetection/JunctionDetection.java:[120,81] incompatible types: net.imglib2.roi.util.RealLocalizableRealPositionable cannot be converted to net.imglib2.RealPoint
[ERROR] /home/hadim/ij/imagej/imagej-ops/src/main/java/net/imagej/ops/segment/ridgeDetection/JunctionDetection.java:[121,81] incompatible types: net.imglib2.roi.util.RealLocalizableRealPositionable cannot be converted to net.imglib2.RealPoint
[INFO] 4 errors 

It seems to be an API compatibility issue but I haven’t been able to figure out what is going wrong.


Even with the compilation issue, I have been able to run the op with the following code (inspired by your script): https://github.com/hadim/imagej-sandbox/blob/master/src/main/java/RidgeDetect.java

I tried to play with the parameters but each time the output lines list is empty. Any idea?

(It looks like I can’t upload my image on the forum, so here is a link to it: https://drive.google.com/file/d/1WNWqyqj-H0F6-pMvd6jQBBmnFHpjyt-3/view?usp=drivesdk)


Somehow I am having a hard time converting your script to Java. So I tested your script directly, and it seems to do a good job.

I’ll try to do more extensive tests and to integrate it to the plugin I am developing and see how it goes.

One suggestion would be to create an op to convert the lines to the binary image with this code you already use:

long[] pos = new long[2]
for (DefaultPolyline d in outputLines) {
	for (int i = 0; i < d.numVertices(); i++) {
		def p = d.vertex(i)
		pos[0] = (long) Math.round(p.getDoublePosition(0))
		pos[1] = (long) Math.round(p.getDoublePosition(1))
		outputRA.setPosition(pos)
		outputRA.get().setReal(255)
	}
}

Even if it’s not my use case, I think some people will want to just use this op to threshold their image. So being able to do something like this:

outputLines = op.run(RidgeDetection.class, input, ...)
output = op.run(LinesToBinary.class, outputLines)

could be pretty useful.


Hi @hadim!

Yeah, this is due to the rework of the ImgLib2 ROIs by @awalter17. I’m not too sure how to fix the issue, but it seems like you have gotten around it, so I am not too worried. I think this issue should be fixed by the time the ROIs are completed.

This could be done. Do you have any suggestions regarding the other features that I described above? @ctrueden and I are really trying to assess the need for other features as the current ones become more polished, and to do this we are looking for community input. Any suggestions as to what could be useful?

On another note, I was wondering if you had any input on the current state of the input parameters. I noticed that @twagner’s plugin used lowContrast and highContrast as inputs to the filter, but I could not figure out what they signified about the image. He also used these parameters to auto-generate the two thresholds, which I think could be nice.

I also ran the script on your image, using a line width of 4, a minimum ridge length of 4, a lower threshold of 7, and a higher threshold of 20, and got this (white output against red input) result (compared to the original):




I need to integrate this op into my plugin to be able to do more extensive tests.

That being said, an op to break up the polylines would definitely be interesting and useful. We could also imagine other “polyline ops” in the future, such as one that selects the straighter lines among four lines connected by a junction point.

Also, in the original IJ1 plugin the minimum ridge length does not exist (or did I miss it?). Could it be determined by the line width?

As for the parameters, everything is nicely explained here: https://imagej.net/Ridge_Detection. Can you confirm your line width parameter is used to calculate the sigma parameter? It would be useful to propose several variants of the ridge detection op that take different sets of parameters: for example, one op that takes line width, high contrast, and low contrast, and another taking sigma and the lower and higher thresholds. Does that make sense to you?

Here is the code I use to compute the parameters https://github.com/hadim/FilamentDetector/blob/master/src/main/java/sc/fiji/filamentdetector/detection/RidgeDetectionFilamentDetector.java#L305

I really like this algorithm and I think it’s a very useful one, but it’s true that the parameters are far from intuitive. If you can think of a way to automatically define the threshold parameters and let the user only set the line width/sigma, that would be fantastic! I can’t think of any clever way to do it… Or maybe a way to merge both threshold parameters into a single one?

Also, support for 3D images would be awesome, as the original plugin does not support it. But I don’t know how hard it would be to implement.

Anyway, this op looks good to be merged I think. Then it’s just a matter of iterating and opening new PRs in order to add more features to this op.

Again, great job.


Good to know. There was also interest here at LOCI in creating an op to link the straighter of the lines in an intersection, so this might be a good future feature.

You are right, this was something that I came up with to deal with noise and the skeletonization that can happen at the ends of wider lines. If someone does not want this parameter to affect the segmentation, they can always set it to 1, since the output polylines must have at least 1 vertex as is.
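As a side note, the pruning itself is trivial. Here is a minimal, library-free sketch of the idea, using a plain list of vertex arrays as a stand-in for DefaultPolyline (the class and method names here are illustrative, not the real op):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the minimum-ridge-length pruning described above.
// A "line" here is just a list of 2D vertices; the real op works on
// DefaultPolyline objects, so names and types are illustrative only.
public class RidgeLengthFilter {
    public static List<List<double[]>> filterShortRidges(
            List<List<double[]>> lines, int minVertices) {
        List<List<double[]>> kept = new ArrayList<>();
        for (List<double[]> line : lines) {
            // Drop any polyline with fewer vertices than the minimum.
            if (line.size() >= minVertices) kept.add(line);
        }
        return kept;
    }
}
```

With minVertices set to 1, every detected polyline survives, which is why the parameter can effectively be disabled.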

Doh. This clears things up.

Yes, it does. I followed the paper, which uses width / (2 * Math.sqrt(3)) as the formula for sigma. Note that this equation does not add 0.5 to the result like yours does, but I think we should keep it like this to stay true to the paper. You can find my calculation here. The contrast-based version would be nothing more than a calculation of highThreshold and lowThreshold followed by a call to the threshold-based version, so this would be easy.
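For concreteness, here is a sketch of both conversions in plain Java. The sigma formula is from the paper; the contrast-to-threshold formula is my reading of the one documented for the IJ1 plugin on the Ridge Detection wiki page (the second-derivative response of a bar-shaped line of the given width and contrast), so treat it as an assumption to double-check against the plugin source:

```java
public class RidgeParams {
    // Sigma from line width, exactly as in Steger's paper: w / (2 * sqrt(3)).
    // (The IJ1 plugin adds 0.5 to this value; we follow the paper here.)
    public static double sigmaFromWidth(double lineWidth) {
        return lineWidth / (2.0 * Math.sqrt(3.0));
    }

    // Threshold from a contrast value, per the formula documented for the
    // IJ1 plugin (assumption: verify against the plugin source). Note the
    // IJ1 variant of sigma (with the +0.5) is used in this conversion.
    public static double thresholdFromContrast(double contrast, double lineWidth) {
        double sigma = lineWidth / (2.0 * Math.sqrt(3.0)) + 0.5;
        double halfWidth = lineWidth / 2.0;
        return Math.floor(Math.abs(
            -2.0 * contrast * halfWidth
                / (Math.sqrt(2.0 * Math.PI) * sigma * sigma * sigma)
                * Math.exp(-(halfWidth * halfWidth) / (2.0 * sigma * sigma))));
    }
}
```

A contrast-based op variant would then just compute lowThreshold and highThreshold from lowContrast and highContrast and delegate to the threshold-based op.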

Yeah, this would be nice, but I cannot think of a way of reliably finding the lowerThreshold on a large variety of images, each with a different change in value between the dimmer line points and the background. I’ll put this idea on the back burner, but I do not see anything coming from it.

Thanks a ton for your input! I think it will be a little while before this gets merged, seeing as how

  1. The ops as of now have no namespace. The plan is for them to inhabit the new SegmentNamespace being created on the Hough Circle Transform branch, so that branch needs to be completed first.
  2. The ops depend on the new ROIs, which have not yet been completed.

However I will definitely keep you posted on any further developments with this op.




Also, does your op play nicely with the slice feature of ImageJ Ops? I haven’t tested it, but in my case I will be processing XYT stacks, so I will need to apply the ridge detection op to multiple images.

Not really related, but do you know if slice can parallelize the calls to an op (since each calculation is supposed to be independent)?


The Segment namespace has now been added, so the only thing holding this back from merging is the ImgLib2 ROIs SNAPSHOT.

I talked with @ctrueden, and he said that Slicer should be viable as long as the typing works out. However, it looks like the slicer only allows RAIs as inputs and outputs, so an op to draw the List<DefaultPolyline> output of RidgeDetection onto a RandomAccessibleInterval might be necessary.

However he did say slice can parallelize the call to an op. :smile:


I have been able to run more tests and also start implementing your op in my plugin: https://github.com/hadim/FilamentDetector/pull/11

Results using IJ1 and IJ2 are similar but not exactly the same. Somehow IJ2 seems to be “more robust” to threshold modifications. I don’t really understand where this comes from.

One of the big regressions compared to the IJ1 plugin is performance. In my tests, the op runs 20 times slower than the IJ1 implementation. I don’t know if it’s directly due to your implementation or to the ImgLib2 library, but it would be great if you had time to take a look.

Here is the script I used to measure performance and compare results from both the IJ1 and IJ2 implementation:

#@OpService op
#@UIService ui
#@Img input
#@ImagePlus inputPlus

import groovy.time.*

import java.util.List
import net.imglib2.img.Img
import net.imglib2.img.array.ArrayImgs
import net.imglib2.RandomAccess
import net.imglib2.RealPoint
import net.imglib2.roi.geom.real.DefaultPolyline
import net.imglib2.type.numeric.real.DoubleType
import net.imagej.ops.segment.detectRidges.RidgeDetection
import net.imagej.ops.segment.detectRidges.JunctionDetection

import de.biomedical_imaging.ij.steger.Line;
import de.biomedical_imaging.ij.steger.LineDetector;
import de.biomedical_imaging.ij.steger.Lines;
import de.biomedical_imaging.ij.steger.OverlapOption;

// Parameters
width = 4
ridgeLengthMin = 4
lowThreshold = 20
highThreshold = 60

def fillImage(img, lines) {
	long[] dims = new long[img.numDimensions()]
	img.dimensions(dims) // size the output to match the input
	Img<DoubleType> output = ArrayImgs.doubles(dims)
	outputRA = output.randomAccess()

	long[] pos = new long[2]
	for (DefaultPolyline line in lines) {
		for (int i = 0; i < line.numVertices(); i++) {
			def p = line.vertex(i)
			pos[0] = (long) Math.round(p.getDoublePosition(0))
			pos[1] = (long) Math.round(p.getDoublePosition(1))
			outputRA.setPosition(pos)
			outputRA.get().setReal(255)
		}
	}
	return output
}

def detectIJ2(img) {
	lines = (List<DefaultPolyline>) op.run(RidgeDetection.class,
									   img, width, lowThreshold,
									   highThreshold, ridgeLengthMin)
	return lines
}

def detectIJ1(img) {
	sigma = (width / (2 * Math.sqrt(3)))
	lineDetector = new LineDetector()
	stegerLines = lineDetector.detectLines(img.getProcessor(), sigma,
									 highThreshold, lowThreshold, 0, 99999, false, false,
					                 false, false, OverlapOption.NONE)
	// Convert de.biomedical_imaging.ij.steger.Line to DefaultPolyline
	lines = new ArrayList<DefaultPolyline>()
	for (stegerLine in stegerLines) {
		points = new ArrayList<RealPoint>()
		for (int i = 0; i < stegerLine.getXCoordinates().length; i++) {
			points.add(new RealPoint(stegerLine.getXCoordinates()[i], stegerLine.getYCoordinates()[i]))
		}
		lines.add(new DefaultPolyline(points))
	}
	return lines
}


timeStart = new Date()
linesIJ2 = detectIJ2(input)
timeStop = new Date()
duration = TimeCategory.minus(timeStop, timeStart)
println("IJ2 implementation took " + duration + " to run.")

timeStart = new Date()
linesIJ1 = detectIJ1(inputPlus)
timeStop = new Date()
duration = TimeCategory.minus(timeStop, timeStart)
println("IJ1 implementation took " + duration + " to run.")


println(linesIJ2.size() + " lines have been detected using IJ2 implementation.")
outputIJ2 = fillImage(input, linesIJ2)
ui.show("IJ2 Output", outputIJ2)

println(linesIJ1.size() + " lines have been detected using IJ1 implementation.")
outputIJ1 = fillImage(input, linesIJ1)
ui.show("IJ1 Output", outputIJ1)

(Again I can’t upload my tif image so here is the link to it: https://drive.google.com/open?id=1ICoOEJfpMbeAbDzm_aCJW44Bj2q5dXMC)



I ran this new script on the first image you posted. I made no changes to the script.

Started Hadim_Ridges.groovy at Mon Feb 12 16:13:50 CST 2018
IJ2 implementation took 0.906 seconds to run.
IJ1 implementation took 0.112 seconds to run.
4 lines have been detected using IJ2 implementation.
0 lines have been detected using IJ1 implementation.

I also did not find any lines with the IJ1 implementation using the script you wrote. I changed the threshold parameters because I found that the best results from each filter required different lower and higher thresholds.

// Parameters
width = 4
ridgeLengthMin = 4
lowThreshold2 = 5
highThreshold2 = 20
lowThreshold1 = 10
highThreshold1 = 20


Started Hadim_Ridges.groovy at Mon Feb 12 16:25:24 CST 2018
IJ2 implementation took 0.274 seconds to run.
IJ1 implementation took 0.012 seconds to run.
7 lines have been detected using IJ2 implementation.
52 lines have been detected using IJ1 implementation.

It is obvious from these two trials that the Ops implementation took, like you said, a great deal longer. However, the Ops result does return better data, finding only the 7 structures visible in the image, while the IJ1 implementation has trouble at the end points.

I investigated the major source of the time taken by the Ops implementation with some simple System.out.println() statements and System.nanoTime(), splitting the op into its data-generation (setup) portion and its DefaultPolyline-construction portion.

SetupTime: 0.6905766740001127
LineBuilding: 0.026686610999604454

This shows that most of the time is spent generating the images that store the Ridge Detection metadata. If I remember correctly, the IJ1 implementation stores all of this data in a 1D array and does all of its calculations without separating them into multiple ops, which together could explain the time difference.
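For reference, the instrumentation was nothing more than the standard System.nanoTime() pattern sketched below; the two “phases” here are placeholder work standing in for the actual op internals, not the real Ridge Detection code:

```java
public class SectionTimer {
    // Convert a nanoTime interval to seconds.
    public static double seconds(long start, long end) {
        return (end - start) / 1e9;
    }

    public static void main(String[] args) {
        // Time the "setup" phase (placeholder for generating the
        // Ridge Detection metadata images).
        long t0 = System.nanoTime();
        double acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += Math.sqrt(i);
        long t1 = System.nanoTime();

        // Time the "line building" phase (placeholder for the
        // DefaultPolyline construction).
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1_000; i++) sb.append(i);
        long t2 = System.nanoTime();

        // Report both sections in seconds.
        System.out.println("SetupTime: " + seconds(t0, t1));
        System.out.println("LineBuilding: " + seconds(t1, t2));
    }
}
```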


Interesting results.

Do you think there is room for improvement on the IJ2 side to reduce computation time? Since it seems to do a better job of detection, it would be really awesome to have it work as fast as the IJ1 implementation.

I also understand it could require a substantial amount of work…


Repeated calls into namespaces such as opService.create().img(...) could be a possible explanation, since they trigger a matcher lookup every time they are used.

It might be worth your effort, @gselzer, to re-use Op instances where possible. For instance, four calls to opService.filter().derivativeGauss(...) are four trips through the matcher, which could possibly explain the times.

Also, did you profile the runs to see where the time is actually spent?

And, as always, thanks for your efforts on this!!



This would be nice, but from what I understand it would be very difficult to change the parameters of an op instance once it has been created. This would prove problematic, as each call to opService.filter().derivativeGauss(...) passes a different integer array denoting the specific partial-derivative convolution to be done. This is also a little bit beyond my understanding of the Op structure. I will talk with @ctrueden about this next week and see if we can figure out a way to do this.

@hadim I just finished some tests for the two ops, and the ROIs were recently merged. I will work to remove the dependencies on SNAPSHOT versions, but after that is done I think we can review the code and begin to look at merging this.


Great to hear.

About the performance, I hope it will be possible to fix it on the IJ Ops side, because it’s really concerning to me.

With the IJ1 implementation and the kind of datasets we have, it’s possible to tweak the detection parameters almost in real time, since the implementation is quite fast. I am afraid this will be less convenient with the IJ2 implementation, since it’s quite slow.

Now, I understand that the flexibility of the Ops framework comes with a performance downside… I really hope this issue can be addressed in the future.


You could add a setIntegerArray(int[]) method (obviously, with a different name) and set the integer array before calling compute() on the same op. That way, you can circumvent the matching.
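In generic terms, the suggested pattern looks something like the sketch below. All names here are hypothetical (the real op would extend the appropriate Ops base class and be matched once up front); the point is only that the matcher is consulted once, and subsequent calls mutate a field instead:

```java
// Sketch of reusing a single op instance across calls: obtain (match) the
// op once, then mutate its parameter through a setter before each
// compute(), instead of going through the matcher four times per pixel run.
// Class and method names are illustrative, not the real Ops API.
public class ReusableDerivativeOp {
    private int[] derivatives = {0, 0};

    // Set the partial-derivative orders for the next compute() call.
    public void setDerivatives(int[] derivatives) {
        this.derivatives = derivatives.clone();
    }

    // Stand-in for the real convolution; returns a label for demonstration.
    public String compute() {
        return "d" + derivatives[0] + "d" + derivatives[1];
    }
}
```

The one-time matching cost is paid when the instance is obtained; each subsequent call only pays for the setter and the computation itself.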


It is not supposed to. Quite the opposite, actually: the framework lets you write multiple implementations of an operation for different cases that are each tuned for performance.

I’ll meet with @gselzer next week to find the best way forward here.


Gents, has there been any recent work on this plugin? I’m using Thorsten Wagner’s Ridge Detection to measure the length distribution of straight fiber fragments, which are often touching, lying parallel, or crossing, and am having a difficult time cleanly identifying and measuring all the individual fragments.

I use smaller sample images taken from the real-life ones with the toughest elements in them (see above a typical example of a 36-fragment test image) to optimize the various parameters quickly, but finding a globally optimal set of parameters in this 6+-dimensional parameter space is a major chore. For example, I can’t eliminate some spurious junction points without affecting some other results, but if there were a way to specify that the radius of curvature of the fragments is infinite, i.e., that they are straight lines, it would simplify the problem.


Hi @TG3!

I noticed you said that you were still using Thorsten Wagner’s version; we pushed the newer Ops version shortly after this discussion. Have you checked out this newer version?

Unfortunately, one of the drawbacks of the particular Ridge Detection algorithm that we use is that the filter tends to extend its lines toward the most “line-like” data point neighboring the current one, instead of always looking in a straight line. This proves particularly useful when extracting a non-linear ridge from an image, since the next ridge point is (naturally) not always going to lie on a straight line. However, as you have noticed, this means the algorithm has trouble any time it reaches a junction.

Although we do not do this in the Ops version (if I remember correctly, Thorsten does), the algorithm normally breaks up all ridges at the junction points. This is done since in many cases (for example, think of three ridges coming to an intersection with a 60 degree angle between each) it can be difficult to determine exactly which of the two (both “line-like”) points is the continuation of a line and which is the intersecting one. This attempt at being objective about creating the most “line-like” lines, along with the Gaussian smoothing done within the algorithm, is probably what is causing the issues in your case. There was talk of creating an algorithm to go over the results and re-form lines to fix some of the junction errors you seem to be having, but no developments were made.

With our newer version of this algorithm, we implemented the algorithm that Thorsten’s version uses to form the ridges as an Op, and we also created a Junction Detection Op to find the junctions in the lines. There was talk of implementing more features (see the first posts of the thread), but, as said already, none of this was really developed due to lack of a tangible use case.

If there is anything specific that I can help you out with, don’t hesitate to ask!

