How to manipulate Delaunay Triangulation output?

Hello! This is my first time posting, so please don't hesitate to ask me questions or point out anything I missed.

This is an example image of the types of patterns I see in my photos. I have a cluster of user-identified points (multipoint tool) that then need to be analyzed for neighbor-to-neighbor distances. Note: I tried to identify my points via the threshold control; however, there isn't enough contrast in my images to do this reliably.

Screenshot (61)

The problem is that the plugin that runs Delaunay triangulation connects all neighbors. In my samples this isn't what I want; some connections are extraneous. My idea was to take the readout of the Delaunay triangulation, filter for the connections I want, and then recalculate the average neighbor distance. (Arrows mark unwanted connections.)

Screenshot (60)

I found a macro posted on this forum to run Delaunay Triangulation analysis: [].

This is a great start and almost does exactly what I want. It prints a readout of coordinate pairs. I'd like to associate the coordinates with point names in the ROI Manager, filter them for the connections I'd like to keep, and then recalculate the output: object, number of neighbors, average distance, and then the overall average distance. I can do all this by hand using the coordinate readout. Ugh, please help me not have to do it that way.

Coordinate return:

Analysis return:

The macro attempts to store the points from the coordinate readout using the segment of code below. However, it does this incorrectly: it adds the same point multiple times, misses points, and the points it does add aren't associated with the readout's numbering order.

Macro piece

(I tried the commented-out section in green; still no go.)

My basic idea was, if I can get:

-the ROI Manager to associate properly with the points
-a user filter for the correct connections
-a way to feed the list of correct coordinate connections to a macro
-a final analysis of distance between neighbors and number of neighbors

And finally a nice wish:
-draw the connected points that were counted.
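The filtering and recalculation steps could be sketched in the ImageJ macro language roughly like this. This is only a sketch under assumptions: the `x1`/`y1`/`x2`/`y2` arrays and the `maxLength` threshold are hypothetical placeholders for however the connection list from the Delaunay readout ends up stored, and drawing the overlay assumes an image is open.

```ijm
// dummy connection list: endpoints of three candidate connections
// (in practice these would come from the Delaunay readout)
x1 = newArray(10, 10, 200); y1 = newArray(10, 10, 10);
x2 = newArray(20, 10, 10);  y2 = newArray(20, 40, 10);

maxLength = 50; // filter threshold in pixels (assumed value)
sum = 0; kept = 0;
for (i = 0; i < x1.length; i++) {
    d = sqrt(pow(x2[i] - x1[i], 2) + pow(y2[i] - y1[i], 2));
    if (d < maxLength) { // keep only plausible connections
        sum += d;
        kept++;
        Overlay.drawLine(x1[i], y1[i], x2[i], y2[i]); // draw kept connection
    }
}
Overlay.show();
print("Kept " + kept + " connections, average distance: " + (sum / kept));
```

The same loop could also count neighbors per point by tallying how often each point index appears among the kept connections.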

I have more details as needed,
Thanks for the help!



It's unclear what your question is… Here are some suggestions; I'm not sure if they answer your post.

1. Your points are not accessible via a threshold function. They live in a non-destructive overlay, and the threshold only applies to the underlying data.

2. The Delaunay triangulation is generated exclusively from the point coordinates. If you want to create a graph that does not include those edges and takes into account the whole regions of the leaves that were "selected" by the user, you need to first segment your image, only then detect the regions that were selected, and only then construct a graph based on either the centroids or the whole region extent of the selected leaves. This is not straightforward with menu commands.
   However, the graph you are after is not the Delaunay graph. Perhaps you want the "adjacency graph".

See this post onwards:

You can generate an adjacency graph from the Voronoi segmentation of your image and RCC (explained further down in that thread).

3. If you want to extract the objects, I think you will need more contrast with the background. With thresholding you might be able to extract the bright objects, but not the dark ones.
   Perhaps try a white background or use colour images.

Good luck!

Thanks for the reply!
That is not what my question was. The macro I found returns information in a table describing object number, neighbor numbers, and then the average distance to all those neighbors.

I want it to also return coordinate locations of those objects. It must’ve defined them, correct?


Please define what it is that you call an object.
To me, there aren’t any objects (i.e. binary regions or ROIs) defined in that image, there is just a set of points over which the graph was computed.
Strictly, neighbours (if you are referring to adjacency) would be best based on regions, not points.
It is difficult to help here because it is unclear what the graph you drew really represents. (e.g. the leaves under points 9 and 4 are clearly not neighbours yet there is an edge joining those points).
A neighbour region is best computed via the concept of adjacency (you dilate each region without merging until reaching stability, similar to a Voronoi diagram but starting from the regions themselves, not from points, then look for adjacency relations between those regions).

Hey @gabriel,

I think @Whitney refers to nearest neighbors. If you build a Voronoi diagram starting from the points, you get neighborhood relationships, and I guess that's what we are talking about here.

Hey @Whitney,

I can offer you a library that allows working with lists of points, distance matrices describing their relationships in space, and touch matrices for mapping which object is close to which: #clij. I think you can achieve your goal with it, but I'm also not 100% sure what you want to achieve.

From a black image with white points, you can use spotsToPointlist to receive a list of coordinates. You can also label the points and partition the whole image to find out which points are adjacent. An example of this procedure is shown here. Last but not least, you can measure the distance of adjacent/touching points or of a given number of closest points. There are more methods, for example for averaging distances over touching objects. An example workflow is shown here.
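A rough sketch of that procedure in the ImageJ macro language, assuming CLIJ2 is installed, the active image is a black image with single white pixels marking the spots, and the method signatures are as I read them in the CLIJ2 reference (variable names are mine):

```ijm
// sketch: from a spot image to per-spot distance measurements with CLIJ2
run("CLIJ2 Macro Extensions", "cl_device=");
input = getTitle();
Ext.CLIJ2_push(input);

// turn the spot image into a 2 x n list of coordinates
Ext.CLIJ2_spotsToPointList(input, pointlist);

// all-pairs distance matrix between the points
Ext.CLIJ2_generateDistanceMatrix(pointlist, pointlist, distance_matrix);

// per spot: average distance to its 3 closest points
Ext.CLIJ2_averageDistanceOfNClosestPoints(distance_matrix, distances, 3);
Ext.CLIJ2_pullToResultsTable(distances);
```

The distance matrix is the central data structure here; most of the distance measurements downstream take it as input.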

But before we step further down that road: may I ask what you want to measure in the end? Why are you trying to connect the leaves in that image?


Yes, I selected the points with the multipoint tool which I saved to the ROI Manager for easy exact recall.

The graph readout is from the Delaunay Triangulation calculation that the macro runs. I have it set to clear the ROI Manager, and within the macro it attempts to add each point back into the ROI Manager as it goes through the list. It also creates a table that it populates with "object" number, distance data, and number of neighbors for each "object" or point that I fed into the program. (This should better define "object".)
This is how I have minorly edited the macro:
20200502DeLaunay_Triangulation_Script.ijm (4.0 KB)


The issue I am raising is that the Voronoi diagram we could construct from the point set does not seem to represent the spatial extents of the leaf regions in the image (and by extension, their adjacencies). Sure, one can do it anyway, but it is an arbitrary set of points chosen by hand. Think of the reproducibility of the whole procedure. Check points 10 and 8: would you call those two leaves "nearest neighbours"? There is leaf 6 in between them…

Yes, that would be amazing! That is the kind of manipulation I am trying to achieve. I was using the leaves to make my images more relatable.

I work with zebrafish inner-ear hair cells and have pictures of them clustered in different lattice patterns. The pictures are not high contrast: they are GFP images, and while the points are brighter, they aren't uniformly brighter. Perhaps as I get better at imaging these, it might improve to a point where I could use something to help there.

As an example see the photo below: the “roundish” section in the middle has the lattice structure of points I’m attempting to describe. This one doesn’t because it is cropped, but often there are other green shapes of noninterest.

I am searching for (at least) a semi-automated way to describe how each of these hair cells is related to the next one within a given specimen. The Delaunay Triangulation output connects points (hair cells) that aren't biologically relevant, so I was searching for a way to filter what it calls connections and then receive a similar recalculated output: number of neighbors and average distance (both to neighbors and across the whole structure). Displaying that "graph" might also help others understand the calculations and check what is called a "neighbor".

However, what you suggest may be a more direct way of analyzing these photos. I have some concerns about defining the edges of the structure, but I may be missing something. I have some coding background in Python, but not a ton; mainly on an "I need this to work" basis rather than "build from the ground up".



Awesome, GFP marked nuclei. That’s what I’m used to :wink:

Check out this workflow until the step where the mesh is drawn. It’s almost what you’re trying to do. I can have a look at your data, too. But as you are obviously coding experienced, you may want to go ahead :smirk:

Let me know if this helps!


Not just nuclei- it's membrane-bound- the points are the cilia :slight_smile:

Ah, I meant the opposite- that I’m ok at coding given enough time and a figurative hammer… but I still will need help…

All help is appreciated :smiley:
I’ll play with the workflow and definitely come back with questions! It would be cool to show the data as well… Talking science is always welcomed.



I ran through your workflow completely with your sample data and it works! If I copy it directly from the source, it runs fine.

I thought I’d run one of my photos, and see if I can get it to run with that even if it has trouble identifying points correctly.

It’s having an issue with line 38:

// visualise the dataset
show(input, "input");

Here is the error code it throws:
Screenshot (68)

It says "Undefined identifier in line 38" (copied above). input was defined properly; I can see this when I type out the command again. I changed it to open(input, "input"); temporarily, just to see if I had other issues.

A little farther down another command has the same error (line 51):

show_spots(detected_maxima, "detected maxima");

I wondered if it was an invisible character I was copying over. Edit > Zap Gremlins from the menu doesn't fix this. If I copy the code lines directly from the file where it does run into the new one, it won't run properly.

Also, if I copy pieces of the code from the explanation page rather than the source page, it gives this same error.

Any ideas? What am I missing?

And the log window:

And finally here is the truncated macro I was attempting to run:
20200503edit_all_robData.ijm (1.7 KB)


Hey @Whitney,

thanks for trying out #clij.

You find the show() function at the end of the original script. Just copy it over.

Same here: you find the function at the end of the original script. I put these functions at the end so as not to confuse the reader of the tutorial with these details at the beginning.

So the error message in the log window tells you that RGB images are not supported. Try splitting channels or running run("8-bit"); before executing the script. Furthermore, you are apparently working with JPG files. Can you avoid that format? It's known for causing trouble and artefacts. Can your microscope maybe output TIF files?
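That conversion can also be done defensively at the top of the macro; a minimal sketch, assuming the active image is the one to be processed:

```ijm
// sketch: make sure the active image is single-channel before pushing it
// to the GPU (bitDepth() returns 24 for RGB images)
if (bitDepth() == 24) {
    run("8-bit"); // convert RGB to 8-bit grayscale
}
```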

Let me know if this helps.



This helped! I was most successful changing through the menu to 8-bit, and then to 64-bit before processing. All of my newer images are TIFs; unfortunately some of the older ones were JPEGs (I converted them).

I have been running two of my images through: one that has clearly defined points and one that's a little more ambiguous (most of them are this way). I am super impressed with how correctly it identifies the points using threshold adjustments for each individual image. In the event that it misses some, I was thinking I could open Photoshop and add clear spots? Do you have an alternative suggestion?

In either photo's case, the distance mesh image isn't created properly; it is blank every time I run it with one of my photos. For instance, my more ambiguous photo (switched to JPEG just for embedding here):

Identifies points correctly (requires a 10 threshold)

I was guessing the distance mesh wasn’t being created because of another threshold setting. Even when I set the max parameter to something ridiculously huge (1000) nothing renders.

And final question for now:
When it is finished running, I’d like to see some of the calculations I care about to help describe changes (loss of center hair cells in the image above- sometimes it’s gain or more complicated). In the log, it lists how many spots were detected, but I’d also like to see

  • the average neighbor distance (should be larger around holes)

It's calculating this to generate the distance mesh, right? I should be able to get it to read it out to me then?
I know it's calculating the average distance between a node and all of its neighbors to generate the heat maps- could I get a table of those? It wouldn't make sense without labeling all the nodes accordingly.

Basically, with the last part, I’m attempting to find a quantitative measure relating to the density of the points and how they change with each different condition I am studying. (some of them will have missing hair cells, some of them will have more, some will be more or less spaced out.)

Hey @Whitney,

Be careful, such a strategy is no good scientific practice and reviewers would likely reject a study where Photoshop was used to manipulate raw data.

Furthermore, try to find a strategy that works on all images and does not require setting an individual threshold for each image. Key might be using the correct file formats: not JPG and 8-bit. Instead, try to use original microscope images and automated thresholding. What microscope are you using, btw?

Can you use the menu Image > Adjust > Brightness & Contrast to adjust min/max? Furthermore, if the mean intensity of the image is zero, no mesh was drawn…

There is a method averageDistanceOfTouchingNeighbors allowing you to measure that. Afterwards, you can use pullToResultsTable to get the values out for each object.

Play a bit with countNonZeroPixels2DSphere and/or averageDistanceOfNClosestPoints. They basically do what you want. But maybe wait a day: I just fixed a bug in the second function and will update clij2 tomorrow.

Let me know how it goes!


Hello! I figured out some of it!

But first:

Always, of course, paired directly with raw photos- I’m aware and am very cognizant of transparency in my analysis.

Now onto what I have found so far:

It wasn't rendering correctly because I have 2D images rather than z-stacks. So substituting

Ext.CLIJ2_create2D(mesh, width, height, 32);

for

Ext.CLIJ2_create3D(mesh, width, height, depth, 32);

allowed proper rendering.

This was helpful:

I've been going through the command list as well, and have generated a macro to pull those results from the images. In the macro language there is no wildcard, right?

I know I’ll have at least a couple more questions, but I wanted to update.

Ah! I can get the specific ones in my next reply, but we have a Zeiss and an Olympus with an older QCapture camera setup. Both save as TIFFs; switching the file format as above killed the brightness/contrast.

Whitney :smiley:



So after I’ve generated the touch matrix I try to run this:

//calculate the average distance of the closest points (not working- prints 0s)
Ext.CLIJ2_averageDistanceOfNClosestPoints(touch_matrix, a, 3);

//calculate the average distance of the far points (not working- prints 1 or .5)
Ext.CLIJ2_averageDistanceOfNFarOffPoints(touch_matrix, b, 3);

I'm thinking it's probably a syntax error, but I can't find the right way to do it. The first is printing 0.000, and the second is printing 1 or .5.

Also after going through all the commands I don’t see a way to calculate average touch distance across the entire image, can you point me in the right direction?

I have it pulling the desired information to the results table with:

//prints minimum distance to touching neighbor (working)

//Rename results table
IJ.renameResults(input2 + " Hair Cell Distance Measurements");

And a few others similar to the result above (mean, max, etc.). So next I tried to add a column to the table to describe each of the measurements. Adding a column didn't work, so then I tried to rename the first one, or edit row contents:

//won't rename column
Table.renameColumn(X0, "label"); 
//doesn't rename field of results table
setResult("x0", 1, "label for data");

These aren’t working either. What is the best way to label the results table?



Hey, @Whitney,

A minor issue: averageDistanceOfNClosestPoints takes a distance matrix as input. With a touch matrix this can't work, because a touch matrix is a binary image. The same goes for averageDistanceOfNFarOffPoints, btw.

You can get the average distance over all touching neighbors by applying getMeanOfAllPixels to the result vector of averageDistanceOfTouchingNeighbors, which takes a distance matrix and a touch matrix as input.
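That combination might look like the sketch below. It assumes a labelled spot image `labelled_spots` and a `pointlist` are already on the GPU, and that the argument order matches the CLIJ2 reference as I understand it; the variable names are mine.

```ijm
// sketch: average distance between touching neighbors with CLIJ2
// which labels touch which (e.g. after Voronoi-partitioning the labels)
Ext.CLIJ2_generateTouchMatrix(labelled_spots, touch_matrix);

// all-pairs distances between the spot coordinates
Ext.CLIJ2_generateDistanceMatrix(pointlist, pointlist, distance_matrix);

// per spot: average distance to its touching neighbors
Ext.CLIJ2_averageDistanceOfTouchingNeighbors(distance_matrix, touch_matrix, distances);

// collapse the vector to a single number for the whole structure
Ext.CLIJ2_getMeanOfAllPixels(distances, mean_distance);
print("Average neighbor distance: " + mean_distance);
```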

How did you try that?

I'm not sure what you mean by "label the results table". What are you trying to achieve here?
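If the goal is just renaming a column or writing text into the table, note that the column-name arguments have to be quoted strings. A small sketch against the active Results table (the column name `X0` is taken from your snippet; the row content is a placeholder):

```ijm
// rename a column in the front-most results table; both names are strings
Table.renameColumn("X0", "label");
// write a string into row 0 of the renamed column
setResult("label", 0, "my annotation");
updateResults(); // refresh the Results window
```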

But regarding your distance analysis: You’re almost there, right?


Ah I think I have it all working well now! Thank you so much for your help :smiley: !

I fiddled with the tables, and it's probably not the prettiest code, but it seems to be working well for my purposes. I ended up using Read and Write Excel to save the results tables, and added labels in a roundabout way.

I believe that's all the useful calculations I can get from it! If I have another question, I'll start another thread and mark this one as solved.

Thank you for pointing me in this direction, and helping along the way.