Creating an object class with ROIs from multiple channels

I’ve been working on a script to automate quantification of images for my research. Technically I’m doing immunofluorescence cytology on meiocytes, but I prefer this snake analogy. My images are 3-channel stacks of multiple snakes, with marks for the head (channel 1) at one end, spots (channel 2) somewhere along the body, and the body itself (channel 3).
The 3 main measurements I want from this script are: the length of the body (skeleton), the number of spots on the body, and the distance of each spot from the head.

I have been able to cobble together a macro script that does 30% of what I want it to do. My macro can give me the coordinates of these ROIs (body, head, spots), but the code has metastasized and the output is too disorganized (I can’t tell which measurements are for which snake).

I’ve come to the realization that I need to rewrite my code in JS or Jython to achieve the nimbleness of an object-oriented language. I need to create an object for each snake and assign the head and spot coordinates. I’m having a hard time moving from macro code to a version in Jython or JS.
I think that the AnalyzeSkeleton plugin gives a good start for the objects I need to write, but I’m lost in how to manipulate this plugin in my own code. Mainly, I am not sure how to attribute other elements to this object, such as the position of the head and position of the spots along the snake body. Is there a way to extract the object from AnalyzeSkeleton and incorporate objects or rois from other channels?
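To make the idea concrete, something like this minimal sketch (plain Python, all names made up by me) is the kind of per-snake object I have in mind: the skeleton coordinates come from one channel, and the head/spot coordinates from the other channels get attached to the same object:

```python
# Hypothetical sketch of a per-snake container; all names are made up.
from math import hypot

class Snake:
    def __init__(self, skeleton):
        # skeleton: ordered list of (x, y) pixels, head end first
        self.skeleton = skeleton
        self.head = skeleton[0]
        self.spots = []  # (x, y) centers taken from the spot channel

    def add_spot(self, x, y):
        self.spots.append((x, y))

    def length_to(self, index):
        # arc length along the skeleton up to the given pixel index
        return sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(self.skeleton[:index],
                                                 self.skeleton[1:index + 1]))

    def length(self):
        return self.length_to(len(self.skeleton) - 1)

    def spot_distances(self):
        # distance of each spot from the head, measured along the
        # skeleton (via the nearest skeleton pixel)
        dists = []
        for sx, sy in self.spots:
            nearest = min(range(len(self.skeleton)),
                          key=lambda i: hypot(self.skeleton[i][0] - sx,
                                              self.skeleton[i][1] - sy))
            dists.append(self.length_to(nearest))
        return dists
```

Whether AnalyzeSkeleton’s own result objects can be wrapped like this is exactly what I’m unsure about.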

thanks in advance

Hi @petersoapes

Welcome to the ImageJ forum!

Regarding your question, this sounds like a task for the ImageJ integration in KNIME Image Processing.
If you could upload a few example images and your script for reference, I am sure we can help you out.

Here are some KNIP example workflows:

  • Count Chromosomes
  • High-Content Screening
  • Tutorial: Spot Detection

Hi @tibuch

I actually have just downloaded KNIME this afternoon. It has gone relatively smoothly except I can’t install the Image Processing Extension. I’ve emailed KNIME, so perhaps I’ll get an answer soon to start learning KNIME workflows.

I’m hesitant to upload my macro since it’s >100 lines and the output is not very readable.

Hi @petersoapes,

You downloaded KNIME 3.0, I guess? KNIME Image Processing for KNIME 3.0 will be released on Friday. Until then you could use KNIME 2.12.1 (which is also available for download on the KNIME website), or you unfortunately have to wait. Sorry for the trouble :wink:


good guess, you’re right! I got 2.12.1 working now. Excuse me while I climb up the KNIME learning curve.


@dietzc, @tibuch - the example workflows you shared are great, but what’s the recommended path for acquiring basic KNIME knowledge as a beginner? Just watch the KNIME videos and then the KNIP videos?

@petersoapes can you share your macro code and sample images? There are pros and cons to both continuing with macro development or a KNIME workflow; but in both cases, being able to see the data and what you’ve already tried will greatly increase the chances that someone can help.


@hinerm the easiest way is to watch the KNIME videos and go through all the example workflows (which will soon be moved to the KNIME example server). If there are any questions, we can help in the KNIME forum or here.


I tried uploading an example of the images I’m working with, but it would not upload properly as a tif file. Uploading it to imgur seemed like the only solution.

Below is my macro code. I hope this is the right way to post this kind of file. The goal of this macro is to measure the distance from a green focus to the nearest blue focus, following a red skeleton, for each of the red objects. The biggest problem I am having is how to link ROIs together across channels. I’m able to isolate the distinct red, green and blue ROIs, but it is very hard to know which ROIs link together using the roiManager. I’m afraid the code has turned into quite a hairball :scream: (part of the reason I decided there must be a better way, js? jython? KNIME?)
I apologize in advance for the disorganized output. If anyone has encountered something like this and has suggestions for performing this task, I’d be eternally grateful. But in the meantime I think that I’ll try learning the KNIME toolbox for this task. :wrench:

//This macro takes a 3-slice stack of 16-bit immunofluorescence images as input.
//The primary goal of this macro is to identify ROIs for the main cell features (blue: centromeres,
//red: SC, green: foci) and measure the total distance of the skeletonized SC, the number of foci on each SC, and the distance of each green focus from the blue roi.
//Output is printed to log and results page.
//followTwo function for measuring distance of skeleton
function followTwo(x, y, value) {
    condition = true; pixels = 0; u = x; v = y; a = 0; b = 0;
    // Find which direction to go:
    count = 0; i = 0; j = 0; ip = u-x; jp = v-y;
    //count = neighbors; i,j = direction to move; u,v = previous position; a = orthogonal steps; b = diagonal steps
    if(getPixel(x,y+1)>=value){count++; if(ip == 0 && jp == 1){ }else{ i = 0; j = 1; }}
    if(getPixel(x+1,y+1)>=value){count++; if(ip == 1 && jp == 1){ }else{ i = 1; j = 1; }}
    if(getPixel(x+1,y)>=value){count++; if(ip == 1 && jp == 0){ }else{ i = 1; j = 0; }}
    if(getPixel(x+1,y-1)>=value){count++; if(ip == 1 && jp == -1){ }else{ i = 1; j = -1; }}
    if(getPixel(x,y-1)>=value){count++; if(ip == 0 && jp == -1){ }else{ i = 0; j = -1; }}
    if(getPixel(x-1,y-1)>=value){count++; if(ip == -1 && jp == -1){ }else{ i = -1; j = -1; }}
    if(getPixel(x-1,y)>=value){count++; if(ip == -1 && jp == 0){ }else{ i = -1; j = 0; }}
    if(getPixel(x-1,y+1)>=value){count++; if(ip == -1 && jp == 1){ }else{ i = -1; j = 1; }}
    // setPixel(x,y,value); // print(steps + " " + x + " " + y);
    // Check to see if we should continue:
    if(pixels > 0 && count == 2){ u = x; v = y; x += i; y+= j; pixels++;} 
    else if(pixels==0 && count<=2){ u = x; v = y; x += i; y+= j; pixels++;}
    else {condition = false;  pixels++; i = 0; j = 0;}
    if(abs(i)+abs(j) == 1){a++;} else if(abs(i)+abs(j) == 2){b++;}
    distance = a + sqrt(2)*b; result = newArray(x, y, distance, pixels);
    return result;
}

run("Set Scale...", "distance=0 known=0 global");//remove previous scales
T = getTitle;
run("Stack to Images");

selectImage(3);//centromere area ~30
blueImage = getTitle();
setAutoThreshold("Minimum dark");
run("Convert to Mask");
run("Analyze Particles...", "  show=Outlines display exclude summarize add");
Xlist = newArray();
Ylist = newArray();
for (i=0; i<roiManager("count"); i++) {
	x = getResult("X", i);
	Xlist = Array.concat(Xlist, x);
	y = getResult("Y", i);
	Ylist = Array.concat(Ylist, y);
	centAi = getResult("Area", i);
	print("the area of centromere "+(i+1)+" is "+centAi);
	roiManager("select", i);
	roiManager("Rename", "centromere");//renaming to keep track of rois
}
centromeres = roiManager("count");//centromere indices
print("number of centromeres: "+centromeres);

//process red channel
redImage = getTitle();
setAutoThreshold("Default dark");
run("Convert to Mask");
//create list of SC rois
run("Analyze Particles...", "size=100-Infinity display exclude summarize add");
for (i=centromeres; i<roiManager("count"); i++) {
	roiManager("select", i);
	roiManager("Rename", "SC");
}

//loop through each centromere's central point and try running followTwo;
//if followTwo is unsuccessful, randomize the coordinates and try again.
//The end coordinates are then fed into followTwo again for the full SC length, in case the starting point was not the very end.
ft_coords_X = newArray();
ft_coords_Y = newArray();
for (k=0; k<centromeres; k++) {
	c = 0;//counter for number of attempts
	while (c < 200) {//retry with jittered coordinates until followTwo succeeds
		q = round(Xlist[k])+(round((random()*4)-3));
		p = round(Ylist[k])+(round((random()*3)-3));
		at = followTwo(q, p, 255);
		if (at[2] >= 2) {
			c = c+200;//success: leave the retry loop
		} else {
			c++;
		}
	}
	a = followTwo(at[0], at[1], 255);//new end coordinates as a[0],a[1]
	SCarea = getResult("Area", k);
	print("x_end:  ", a[0]);
	print("y_end: ", a[1]);
	//a[0],a[1] are the other end from the centromere (telomere)
	print("redo length for "+k+"  ", a[2]);
	row_n = k+(centromeres-1);
	setResult("X telo coord", row_n, a[0]);//printing SC information to results window
	setResult("Y telo coord", row_n, a[1]);
	setResult("SC distance", row_n, a[2]);
	ft_coords_X = Array.concat(ft_coords_X, a[0]);//for looping through with foci and centromeres
	ft_coords_Y = Array.concat(ft_coords_Y, a[1]);
}
sc_length = roiManager("count") - centromeres;
print("SC number: "+sc_length);
//process green:foci channel
greenImage = getTitle();
imageCalculator("and create", redImage, greenImage);//only accessing foci overlapping SC
run("Convert to Mask");
run("Analyze Particles...", "size=1-Infinity display exclude summarize add");
fXlist = newArray();
fYlist = newArray();

//loop through centromeres and SCs, to make sure an object has both an SC and a centromere before its foci are counted
//loop through SC
for(sc=sc_length-1; sc<sc_length+centromeres;sc++){
	fx = getResult("X", foci_cnt);//center points of green foci
	fy = getResult("Y", foci_cnt);
	roiManager("select", sc);
	//selecting SC measuring foci
	if(selectionContains(fx, fy)){
		print("foci "+(foci_cnt+1)+" found in SC "+(sc+1));
		for(cent_count = 0; cent_count<centromeres; cent_count++){
			centx = getResult("X", cent_count);
			centy = getResult("Y", cent_count);
			if(selectionContains(centx, centy)){//if SC contains centromere also
				cnt=0;//trial counter
				fociDistance = followTwo(fx, fy, 255);//fTwo of foci
						q = round(fx)+(round((random()*4)-3));
						p = round(fy)+(round((random()*3)-3));
						fd = followTwo(q, p, 255);//fTwo from foci
						if(fd[2] >= 2){ 
						    cnt= cnt+100; 
			          totalSCDistance=followTwo(fd[0], fd[1], 255);//from SC centromere to end of SC
			          ft_fociDistance=followTwo(ft_coords_X[cent_count], ft_coords_Y[cent_count], 255);
			          print((sc+1)+":"+(foci_cnt+1)+" fT first measure "+fd[2]);//distance from foci to nearest end
			          print((sc+1)+":"+(foci_cnt+1)+" 2nd measured distance "+totalSCDistance[2]);//distance of entire SC
			          setResult("foci dist from end", sc+centromeres,ft_fociDistance[2]);
			          print((sc+1)+": % position along SC "+(fd[2]/totalSCDistance[2]));//position of foci by percentage of SC
						}
				}
			}
		}
	}

If you use KNIME, I think you will need some dedicated code in addition to what already exists. We hope to have a (BETA) integration of the ImageJ2 Scripting capabilities released by the end of next week. Then you can add the (potentially) missing piece.

You want to measure the distance from each spot to the purple end-points of the red line, right? What if the line is not straight, e.g. makes a curve? Do you want to measure the distance along the red line, or just the Euclidean distance to the next spot?


I guess a more accurate description of my goal is to measure the number of ‘red’ pixels (post-skeletonizing) between the green foci and blue spots. The followTwo function is able to do this for curved lines. My anticipated hiccup is with multiple SCs that overlap. A possible solution would be to choose the red path to the blue spot with the least sharp turn. For now, though, I am planning to skip over those.
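For what it’s worth, the pixel-counting idea can be sketched in plain Python (a toy stand-in for followTwo, with names I made up): treat the skeleton as a set of 8-connected pixels and walk outward from a starting pixel, weighting diagonal steps by sqrt(2). On a single non-branching line this gives the arc length; overlapping SCs would need a proper shortest-path search.

```python
from collections import deque
from math import sqrt

def path_length(skeleton, start, goal):
    # Walk the 8-connected skeleton pixels from start, recording the
    # accumulated step length; orthogonal steps count 1, diagonal sqrt(2).
    dist = {start: 0.0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (dx, dy) != (0, 0) and nxt in skeleton and nxt not in dist:
                    dist[nxt] = dist[(x, y)] + (sqrt(2) if dx and dy else 1.0)
                    queue.append(nxt)
    return None  # goal is not connected to start along the skeleton
```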

Wow you guys work very quickly. I look forward to trying it out.

Below is my macro code. I hope this is the right way to post this kind of file.

@petersoapes the forum has decent code formatting so what you posted is :thumbsup:

Because your code is fairly involved, using Gists on GitHub is a nice alternative. It also allows us to track changes over time, which can be very helpful.

My updated macro can be found in this gist. I focused on a few specific goals for now:

  • Identify the analysis areas that need improvement (search for “TODO” comments)
  • Standardize formatting (I just pasted it into an editor that handles indentation nicely - in this case, Eclipse)
  • Identify the centromere associated with each skeleton (See this Roi.contains use)

You did a good job documenting the functionality of your macro, but because of its complexity it can help to be even more verbose. I added additional documentation where I felt it was useful.

I left the foci analysis commented out for now because of the limitations of detecting overlapping SCs. It also seems like my changes broke the SC length calculation somehow so I’d have to look at that again.

Given how functional your code seems right now, I feel like it’s not far from meeting your needs. Once an ROI is added to the manager it can be applied to any image, so it’s just important to name your ROIs in a meaningful way to remember where they came from (which you had done with centromeres and SCs; I just added numbering). For the other measurements I think we can just add more columns for SC/centromere indices, and it will then be easy to tell what’s linked together.

This is reasonable. I think you could easily modify the followTwo method to remember the last few steps and when it gets to an ambiguous point, mirror those last steps and fit to the nearest points. It would be nice if we could just use an already-implemented, proven algorithm to do this though. :smile:

Anyway, my takeaway for now is… look at my changes, let me know if I got anything wrong and what else I missed (besides foci analysis, which I’ll look at tomorrow).


@petersoapes there is a ridge detection plugin that may be helpful here… as it at least identifies the different segments and junction points.

thanks, I’ve downloaded the plugin and I have started working with it. My version of the plugin doesn’t have a ‘add to manager’ so I’m not sure how to incorporate it into the macro.

That’s odd… I’m not sure how you installed it, but if you enable the Biomedgroup update site it seems to install correctly.

@twagner - do you have any thoughts on how to use Ridge Detection to produce distinct ROIs from overlapping lines? It seems like all the info is there. Your plugin seems like the best I could find for simple line detection so it would be awesome if there was a checkbox to output overlapping ROIs instead of cutting segments at the junction points.

see also: this issue I just opened


Sorry, I hadn’t realized most plugins should be added to the Updater list. Checking the Biomedgroup update site fixed the missing ‘Add to roiManager’ option. Thanks

Hi @hinerm, @petersoapes

I’m interested in the recombination of single line segments into “full” lines as well. But I think it is not that easy; please read my comment on your issue:

I think the best way to go is to generate all possible line combinations, and then let a trained SVM decide which combinations are good and which are not. What do you think?
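A rough sketch of the “generate all combinations” half (plain Python, hypothetical names; a simple angle-continuity threshold stands in here for the trained SVM):

```python
from itertools import combinations
from math import atan2, pi

def heading(segment):
    # Overall direction of a polyline segment, from first to last point
    (x1, y1), (x2, y2) = segment[0], segment[-1]
    return atan2(y2 - y1, x2 - x1)

def turn_angle(a, b):
    # Absolute change of direction (radians) if segment b continues segment a
    d = abs(heading(a) - heading(b)) % (2 * pi)
    return min(d, 2 * pi - d)

def candidate_joins(segments, max_turn=pi / 6):
    # All segment pairs whose directions are continuous enough to be one line;
    # a trained classifier would score these candidates instead of a threshold.
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(segments), 2)
            if turn_angle(a, b) <= max_turn]
```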




Thanks so much for your help! I was struggling with the Roi.contains() function for a week before giving up. It’s amazing how much another pair of eyes can help. I forked the gist to start adding more things to the macro code.

I agree that the segmentation is one of the more pressing issues for the code. I will play with the followTwo function to see if I can create something that tracks the angle of the paths. As for how this issue would affect my data: at this stage I am more interested in extracting the position of foci on clean single SCs. If I can’t resolve the segmentation issue, removing the SCs with >1 centromere would be an acceptable solution.

I had a clarification question concerning code around line 145 of the gist. I have the output print ‘redo length’ which is a second SC length measurement starting from the end coordinates of the first run of followTwo and going to the opposite tip of the SC near the centromere. This seems to be broken for all SCs, including clean straight SCs.
Is this the followTwo reversibility problem you mention in the comments? Could you elaborate a bit? I don’t understand how the segmentation (overlapping SCs right?) would prevent followTwo reversibility in a single straight SC.

Also, I feel obliged to disclose the source of the followTwo function. The author shared it in reply to a Reddit post, and then I met them in person at the Madison ImageJ conference two days later!

Sure, what I meant was… because followTwo is operating on pixel values and the skeletons overlap it may not find the original starting point when it’s called from the “end point.” Consider two overlapping skeletons - you get a branching line with four end points - call those points A, B, C and D. If we call followTwo starting at A, it may end at point B. But if we call followTwo on B it may end up at point C or D… because there is a continuous line of pixel values connecting these four points.
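To make the four-endpoint situation concrete, here is a toy sketch (plain Python, not anything from the macro): an end point is simply a skeleton pixel with exactly one 8-connected neighbour, so two crossed lines yield four of them.

```python
def endpoints(skeleton):
    # End points of a pixel skeleton: pixels with exactly one
    # 8-connected neighbour that is also part of the skeleton.
    ends = []
    for (x, y) in skeleton:
        neighbours = sum((x + dx, y + dy) in skeleton
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        if neighbours == 1:
            ends.append((x, y))
    return ends
```

On two overlapping skeletons this reports the points A, B, C and D, but nothing in the pixel values says which pairs belong to the same skeleton, which is exactly the reversibility problem.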

Anyway! After working on the aforementioned Ridge Detection, I believe I have a macro that does everything you want. See my updated macro.

There are a few things to note:

  • There is a break point in the script where you need to manually run Ridge Detection. This is because I haven’t figured out how to get the overlap heuristic to be properly recognized by the macro recorder. I plan to eventually fix this, but it is at least functional now.
  • I did not optimize performance of associating foci or centromeres with skeletons. I’m iterating through the skeleton coordinates because calling contains on the skeleton rois proved unreliable. I noticed potential errors in the calculations that I assume are due to the use of the roi manager making repeated selections… it’s possible that it’s a threading issue and some wait calls are required? I’m not sure.
  • This still isn’t 100% accurate. The overlap detection that I implemented is a simple straightness heuristic. In your dataset there are cases where the centromere terminal of a SC ends in direct overlap with another SC. This creates problems in determining centromere association with both skeletons. This applies to foci association as well.
  • Just to clarify, the foci distance is Euclidean (and not a skeletal length)
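To illustrate that last point with a toy example (plain Python, made-up data): on a curved SC the two measures diverge noticeably.

```python
from math import hypot

# Toy skeleton: an L-shaped SC from (0, 0) to (3, 0) to (3, 3)
skeleton = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]

euclidean = hypot(skeleton[-1][0] - skeleton[0][0],
                  skeleton[-1][1] - skeleton[0][1])  # straight-line distance
skeletal = len(skeleton) - 1  # unit steps walked along the skeleton

# euclidean is about 4.24 here, while the skeletal length is 6
```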

If you have problems running the new macro, or interpreting the results, let me know.

As to the original issue of rewriting in a different language, I do believe this could offer some performance benefits (more options for rapidly finding overlap between your regions, for example) as this macro will not scale well with larger images. I would probably simply write it in Java, since the code changes required would be minimal. But still, this would only be necessary if you run into truly blocking issues with the latest version of the macro.



The problem was that the overlap-resolution method was not being read out in the dialogItemChanged method. It now records nicely.