2D -> 3D Integer labels

Hi there,

I have done a slice-by-slice segmentation of a 3D stack, giving me integer labels for each slice, and now I want to do something like a nearest-neighbour search to combine the 2D labels in 3D and create 3D objects. Is there something already available for that? A simple nearest-neighbour tracker along one image dimension would be enough to create such a 3D label image. Python or Fiji would both work in this context, but not other software tools, as I would like to keep the user interface constrained to these two tools for the moment.

I used this script for similar purposes:

3d_analytics_CH.py


Here comes a CLIJ2 version delivering an indexlist mapping points from two point lists according to nearest neighbors:

run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();

// define two images with labelled individual pixels
Ext.CLIJ2_pushArray(input1, 
	newArray(0, 1, 0, 0,
	         2, 0, 0, 0,
	         0, 0, 3, 0), 4, 3, 1);

Ext.CLIJ2_pushArray(input2, 
	newArray(0, 0, 1, 0,
	         2, 0, 0, 4,
	         0, 3, 0, 0), 4, 3, 1);

// determine point coordinates
Ext.CLIJ2_labelledSpotsToPointList(input1, pointlist1);
Ext.CLIJ2_labelledSpotsToPointList(input2, pointlist2);

// determine a distance matrix showing distances of all points to all points
Ext.CLIJ2_generateDistanceMatrix(pointlist1, pointlist2, distance_matrix);

// crop out the part of the distance matrix that ignores the background (row/column 0)
Ext.CLIJ2_getDimensions(distance_matrix, width, height, depth);
Ext.CLIJ2_crop2D(distance_matrix, distance_matrix_without_background, 1, 1, width - 1, height - 1);

// determine closest points; similar to a minimum-y projection
n_closest_points_to_find = 1;
Ext.CLIJ2_nClosestDistances(distance_matrix_without_background, distances, indexlist, n_closest_points_to_find);

print("Indices for closest points:");
Ext.CLIJ2_print(indexlist);

With this output:
[image: printed index list]

Let me know if I can help integrating it into your workflow!

Cheers,
Robert


Hello,
Thank you for the answers. I first tried Sebastian's script with some modifications, but it did not really work as I expected. For Robert's suggestion I would have to code it in, which is ok; I was just asking whether something already exists that works out of the box for the kind of image I am attaching. It is an integer-labelled image with a few Z slices, and I want to, in a way, recolour the image in Z by stitching together the nearest-neighbour regions as a single colour per 3D object, and then view it as a 3D object in Napari perhaps. Forum.tif (16.0 MB)


Hi @kapoorlab,

you could use the matching function from stardist to match instances from one 2D slice to the next and then relabel those that exceed a certain threshold, e.g. like so:

from stardist.matching import matching  
res = matching(img[0],img[1], report_matches=True)

print(res.matched_pairs)
print(res.matched_scores)

It will return the matching indices res.matched_pairs (which you could use to relabel the new slice) and the IoU scores res.matched_scores (which you could use to filter out spurious matches).
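To make the relabelling step concrete, here is a hypothetical numpy-only sketch of what "use the matched pairs to relabel the new slice" could look like. The function name `relabel_from_matches` is mine, and I am assuming `matched_pairs` comes as `(label_in_first, label_in_second)` tuples with `matched_scores` aligned to it; check how your stardist version lays these out before using this.

```python
import numpy as np

def relabel_from_matches(prev_slice, curr_slice, matched_pairs,
                         matched_scores, iou_threshold=0.5):
    """Relabel curr_slice so that matched objects inherit the label of
    their partner in prev_slice; matches below iou_threshold are left alone.

    matched_pairs  : iterable of (label_in_prev, label_in_curr) tuples
    matched_scores : IoU score per pair, aligned with matched_pairs
    """
    out = curr_slice.copy()
    for (prev_label, curr_label), score in zip(matched_pairs, matched_scores):
        if score >= iou_threshold:
            # inherit the previous slice's label for this object
            out[curr_slice == curr_label] = prev_label
    return out
```

Filtering on the IoU score here plays the same role as the distance threshold discussed further down: it cuts spurious links between slices.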


What was the unexpected result from my script? I basically used functions from MorphoLibJ, so I would be interested in what could be improved.

Thx, sebi06

I have a good feeling about this; I will try it and let you know how that feeling went :). Thanks a lot.

Hi Sebastian,

It didn’t really do any nearest-neighbour linking; it just gave me a Z stack where the labels in one slice are not linked to the labels in the next to form a 3D object, and there wasn’t a threshold for linking in that script. I did see that it uses MorphoLibJ, but it wasn’t doing nearest-neighbour linking, was it? The part I modified was to remove the watershed from the script, because I am already inputting an integer-labelled image, so I do not need any watershed or connected-components step on that image.

But maybe I did something wrong and you can try it with the example image I uploaded and see if it links 2D objects to make 3D?

Hi,

It works by combining all planes (segmented, for example, slice-by-slice earlier) to create 3D objects. Splitting is optionally done by a 3D watershed afterwards, and the output is a Z stack where every object has a unique colour (aka label), together with measurements like volume etc.
The input planes must all be binary images from some kind of segmentation. Maybe this is the misunderstanding?

Could be, yeah, I guess that is it. But each of my input planes is integer-labelled rather than binary, so for my purpose the creation and splitting of 3D objects should be a single step rather than two, and that would need a threshold to cut off linking in 3D.
Yeah, my dataset is just different from what your script requires as input.

In the end, this code snippet worked for me:

import numpy as np
from scipy import spatial
from skimage import measure


def RelabelZ(previousImage, currentImage, threshold):
    """Give each object in currentImage the label of the nearest object
    (by centroid distance) in previousImage, unless that distance
    exceeds threshold, in which case the current label is kept."""
    # copy, so that reading from currentImage below is not affected
    # by the relabelling we do along the way
    relabelimage = currentImage.copy()
    indices = [prop.centroid for prop in measure.regionprops(previousImage)]

    if len(indices) > 2:
        tree = spatial.cKDTree(indices)
        currentindices = [prop.centroid for prop in measure.regionprops(currentImage)]

        if len(currentindices) > 2:
            for index in currentindices:
                currentlabel = currentImage[int(index[0]), int(index[1])]
                if currentlabel > 0:
                    distance, nearest = tree.query(index)
                    previouslabel = previousImage[int(indices[nearest][0]),
                                                  int(indices[nearest][1])]
                    if distance > threshold:
                        # too far away: keep the current label
                        relabelimage[currentImage == currentlabel] = currentlabel
                    else:
                        # inherit the label of the nearest previous object
                        relabelimage[currentImage == currentlabel] = previouslabel

    return relabelimage
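To link a whole stack rather than a single pair of slices, such a pairwise linker can be run slice by slice so that labels propagate through Z. Here is a minimal sketch; `relabel_stack` is a hypothetical driver name of mine, and `relabel_pair` stands for any function with the `(previous, current, threshold)` signature used above.

```python
import numpy as np

def relabel_stack(stack, relabel_pair, threshold):
    """Propagate labels through a (Z, Y, X) label stack slice by slice.

    relabel_pair(prev, curr, threshold) -> relabelled curr,
    e.g. a function like RelabelZ above.
    """
    out = stack.copy()
    for z in range(1, out.shape[0]):
        # use the already-relabelled previous slice, so labels
        # propagate from the first slice through the whole stack
        out[z] = relabel_pair(out[z - 1], out[z], threshold)
    return out
```

Feeding each slice the already-relabelled previous slice (rather than the raw one) is what turns per-slice 2D labels into consistent 3D objects.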

Hi @kapoorlab,

Thanks for that code snippet.
I liked your approach with the KDTree on the centroids, so I had a play with it, as I sometimes need this functionality as well. I noticed that your code snippet does not work correctly in all cases: if two slices don’t contain the same number of objects, the combined slice can end up with multiple objects that share the same label index.
I did a quick and dirty fix by relabelling one image such that both images have non-overlapping label sets before combining them.
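The offsetting idea can be sketched in a few lines of numpy; `make_labels_disjoint` is a hypothetical helper name, not from any library.

```python
import numpy as np

def make_labels_disjoint(label_a, label_b):
    """Shift the nonzero labels of label_b past the maximum label of
    label_a, so the two slices share no label id (background 0 is kept)."""
    offset = label_a.max()
    shifted = label_b.copy()
    shifted[label_b > 0] += offset
    return shifted
```

After this, the two slices can be combined without two different objects accidentally ending up with the same index.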

You can find a notebook with my experiments and code changes here.

Here is a visual summary:

raw volume:

slice-by-slice segmentation with StarDist:

Your code snippet applied to the whole stack in sequence (some labels are merged correctly, but label indices jump):

Your code snippet with minor changes:


Yup, that is the final hammer blow! Awesome, thanks.