Thin 3D Object Splitting

Hi
@ThomasBoudier @iarganda @Thomas_Pengo @dlegland @VolkerH @apoliti @Alex_H

I am looking for methods to split thin 3D objects. What makes this interesting, imho, is the fact that conventional distance transforms should not reliably work as a basis for a watershed, because the distance to the outside of the objects is small everywhere. Thus, one cannot find meaningful maxima identifying object centres.

It feels to me as if one needs a logic of finding the distance to the outside of the object in a direction perpendicular to the direction given by the shortest distance.

Probably there are other ways, e.g. based on Hessian eigenvalues?! Curious to hear your suggestions!
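
To make this concrete, here is a minimal synthetic sketch in Python/SciPy (toy data only, nothing from the images attached below): inside a thin plate the distance transform is capped by the plate's half-thickness, so a touching blob easily dominates and the plate itself produces no useful maximum.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy volume: a thin plate (~4 voxels thick) touching a ball.
vol = np.zeros((40, 100, 100), dtype=bool)
vol[18:22, 20:80, 20:80] = True                                    # thin plate
zz, yy, xx = np.ogrid[:40, :100, :100]
vol |= (zz - 20)**2 + (yy - 50)**2 + (xx - 85)**2 < 12**2          # ball touching the plate

dist = distance_transform_edt(vol)
print(dist[18:22, 20:80, 20:60].max())   # ~2: capped by the plate's half-thickness
print(dist.max())                        # ~12: only the ball has a pronounced maximum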

My use case is that I would like to separate the central oblate object in the image below:

The desired result would look like this:

Intensity raw data: intensities.tif (6.6 MB)

Binary mask (at some threshold): MetaphaseDNA.tif (3.4 MB)

Desired binary result: Object.tif (6.5 MB)


Hi @Christian_Tischer,

I tried a simplistic approach first by opening the volumes, I used in this case the minimum filter 3D to erode the binary image, I used radX=4, radY=4 and radZ=2, I then applied a maximum filter 3D to dilate the eroded volume. Note you can also use different radii for X and Y, in this case you could use a larger radius for X.
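
For anyone trying the same thing outside Fiji, a rough SciPy sketch of that opening could look like the following (a box neighbourhood is used as an approximation, Fiji's Minimum/Maximum 3D filters may use a slightly different kernel shape, and the output filename is made up):

# Rough sketch of the opening described above: binary erosion followed by
# dilation with an anisotropic box of half-widths radZ=2, radY=4, radX=4.
import numpy as np
from scipy import ndimage as ndi
import tifffile

mask = tifffile.imread("MetaphaseDNA.tif") > 0                  # axes (z, y, x)
box = np.ones((2 * 2 + 1, 2 * 4 + 1, 2 * 4 + 1), dtype=bool)    # 5 x 9 x 9 box
opened = ndi.binary_dilation(ndi.binary_erosion(mask, structure=box), structure=box)
tifffile.imwrite("MetaphaseDNA_opened.tif", opened.astype(np.uint8) * 255)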

Best,

Thomas


Hi Christian,

I'm not very experienced in 3D analysis, so this is most likely far from optimal. The following uses the MorphoLibJ library, which is a favourite of mine.

Rather than using the maxima from the distance transform as seeds for the watershed, I’ve used extended maxima of the intensities.


As the intensity is being used:

  • the dimmer cells will be excluded
  • the degree of pre-smoothing and the threshold in the extended maxima will control whether the ROI of interest is separated from the dimmer cells below

I've read the Frangi paper a couple of times and it always seems like the right way to detect tubes and plates in 3D images. But whenever I sit down and run the plugins I've never been able to make sense of the results.

Does anyone know of a good tutorial for using Frangi Vesselness or other Hessian-based methods?

Cheers,

Chris

id_orig=getImageID();

// Run some initial filtering
run("Duplicate...", "duplicate");
run("Median 3D...", "x=4 y=4 z=2");
run("Gaussian Blur...", "sigma=2 stack");
rename("close");
//run("Brightness/Contrast...");
resetMinAndMax();
id_close=getImageID();

// Make seeds for the watershed using the extended maxima
run("Extended Min & Max 3D", "operation=[Extended Maxima] dynamic=3000 connectivity=6");
id_max=getImageID();

run("Connected Components Labeling", "connectivity=6 type=[16 bits]");
run("Set Label Map", "colormap=[Golden angle] background=Black shuffle");
id_seed = getImageID();
rename("seed");

selectImage(id_max); close();
selectImage(id_seed); 

// Make a gradient image for the watershed
selectImage(id_close);
run("Morphological Filters (3D)", "operation=[Internal Gradient] element=Ball x-radius=4 y-radius=4 z-radius=4");
id_grad = getImageID();
rename("grad");

// Make a mask
selectImage(id_close); 
run("Duplicate...", "duplicate");
setAutoThreshold("Default");
setOption("BlackBackground", true);
run("Convert to Mask", "method=Default background=Dark");
rename("mask");
run("Invert", "stack");
id_mask = getImageID();

// Run the watershed
run("Marker-controlled Watershed", "input=grad marker=seed mask=mask calculate use");
id_water = getImageID();
run("Labels To RGB", "colormap=[Golden angle] background=Black shuffle");
id_water8 = getImageID();

if( true ){
	selectImage(id_close);close();
	selectImage(id_water);close();
	selectImage(id_mask);close();
	selectImage(id_seed);close();
	selectImage(id_grad);close();
}

I just googled and found another related publication:

They have a nice figure:

Generally, the point is to creatively combine the eigenvalues to distinguish different shapes. Although some general approaches are suggested (e.g. in the above publication), how exactly to combine them mathematically may be project-specific.

Hessian eigenvalues are typically key features for machine-learning (trainable segmentation) based approaches (e.g. in ilastik). The nice thing here is that the computer can learn quite complex combinations of the eigenvalues to achieve the best result for your specific project. The disadvantage, of course, is that the exact math becomes somewhat of a black box.

The https://imagej.net/ImageScience update site (FeatureJ) has very efficient implementations for computing Hessian eigenvalues in 3D. I think the best way to learn more is to play with this on different input data and get a feeling for how the eigenvalues behave in different parts of the image. Then you can maybe even invent your own way of combining them 🙂
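
As a rough starting point, here is a small Python/scikit-image sketch (rather than Fiji/FeatureJ) that computes the 3D Hessian eigenvalues and combines them into a very simple plate-ness measure. The particular combination, the sigma and the output filename are illustrative assumptions, not the measure from the publication above.

# Sketch: 3D Hessian eigenvalues combined into a simple plate-ness measure.
# For a bright plate, one eigenvalue is strongly negative (curvature across
# the plate) while the other two stay near zero; for blobs and tubes at least
# two eigenvalues are comparably large, so the measure is suppressed.
import numpy as np
import tifffile
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

img = tifffile.imread("intensities.tif").astype(np.float32)   # axes (z, y, x)

H = hessian_matrix(img, sigma=3, order='rc')                  # sigma is just an example
l1, l2, l3 = hessian_matrix_eigvals(H)                        # sorted: l1 >= l2 >= l3

eps = 1e-6
plateness = np.clip(-l3, 0, None) * (1 - np.abs(l2) / (np.abs(l3) + eps))
tifffile.imwrite("plateness.tif", plateness.astype(np.float32))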


I tried your suggestion but for me it does not seem to split the objects.
Maybe I am doing something wrong?

open("/Users/tischer/Desktop/MetaphaseDNA.tif");
run("Minimum 3D...", "x=4 y=4 z=2");
run("Maximum 3D...", "x=4 y=4 z=2");

I also don't fully understand the rationale. An opening operation should only work if the connection between the objects is thinner than the objects themselves, shouldn't it?
I think in my problem this is not really the case, or at least I cannot rely on it, because the DNA in the non-dividing cells forms quite large balls that can be connected to the oblate metaphase plate by a relatively thick bridge, at least if I use a rather low threshold.

I thus guess the approach that @evenhuis suggested might be the right direction, i.e. making more use of the fact that the intensity at the connection point is probably always dimmer than in the object centres.

However, I am still curious whether one could manage to split the objects purely based on morphology (i.e. after binarization with a relatively low threshold).

Hi @Christian_Tischer,

Sorry, I tried many combinations; the one that actually works is radX=8, radY=4 and radZ=2. With this you take into account that the object is thinner in the Y direction.

Best,

Thomas


Hi Christian,

I have a method for local adaptive thresholding that seems to work on this type of image. The method computes individual thresholds for each object in order to optimize the ellipse fit, given that criteria for size and for the major and minor axes are fulfilled. The method is described in the following paper: Ranefall P., et al. (2016) Fast Adaptive Local Thresholding Based on Ellipse Fit, Proceedings of the International Symposium on Biomedical Imaging (ISBI'16), Prague, Czech Republic. I have 2D implementations for CellProfiler and ImageJ, and a 3D implementation for ImageJ that can be downloaded from: http://user.it.uu.se/~peran517/downloads/POE/.

I used the following settings to segment the desired object in your volume:
run("PerObjectEllipsefit3D ", “minsize=20000 maxsize=1000000 ellipsethr=0.5 minmajoraxis=20 maxmajoraxis=100 minminoraxis=5 maxminoraxis=20 minmajorminorratio=1 maxmajorminorratio=10 minpeak=0 darkbkg outputfile=”);

Feel free to contact me if you have further questions about the method.

Regards,
Petter Ranefall


I have a similar problem at the moment and have also played around with various ways of finding seed-points to then perform watershed or morphological reconstructions. The results are quite good but I doubt that I will ever get perfect object-splitting results, in particular when more than two objects are touching.

The particular user needs very accurate object counts though. Therefore I am looking for a good GUI where a user can take an existing 3D label image and manually specify a few more seed points in labels that should be split further.

MorphoLibJ has a label editor, but it doesn't have a splitting operation. Is that something that would be easy to add @dlegland, @iarganda? Or is anyone aware of such a label editor?

Splitting in 3D is hard to do manually, that’s why it is not available in the Label Edition plugin. The simplest solution would be to go directly to the initial segmentation and produce smaller labels you can later merge, or play with the initial seeds. How did you get the first result?

@iarganda
In my case (separate issue from Tischi's original post) I have implemented the standard approach of applying a distance transform to the thresholded image and finding local maxima exceeding a certain dynamic in the distance transform as seed points (implemented in Python/scikit-image, not Fiji/MorphoLibJ). The label volumes are then exported as tif files.

While I can tune the sensitivity for finding seed points somewhat by lowering thresholds for local maxima or smoothing the distance transform, the seed points will not be perfect, as there are sometimes dense clusters of 3 or 4 objects where the distance transform is simply not good enough.
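
For reference, a stripped-down scikit-image sketch of that seeding step (filenames and parameter values are placeholders, not the ones I actually use):

# Sketch of the approach described above: EDT of the thresholded image,
# maxima exceeding a given dynamic (h-maxima) as seeds, then a
# marker-controlled watershed. Filenames and parameters are placeholders.
import numpy as np
import tifffile
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.measure import label
from skimage.segmentation import watershed

mask = tifffile.imread("binary_mask.tif") > 0
dist = ndi.gaussian_filter(ndi.distance_transform_edt(mask), sigma=1)   # optional smoothing

seeds = label(h_maxima(dist, h=2))                  # maxima with dynamic >= 2 as seeds
labels = watershed(-dist, markers=seeds, mask=mask)
tifffile.imwrite("labels.tif", labels.astype(np.uint16))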

In these cases splitting by hand would be the way to go. The way I imagine this involves a GUI that allows manual placement of seed points on an existing label. This label is then converted to a binary mask and re-segmented using a watershed (or morphological reconstruction) from these seed points.
The tricky part is the user interface.


What about importing the seed points as point ROIs and using the Interactive Marker-controlled Watershed plugin?

Thanks, I will have a look at this, but it is probably not the solution for the particular user I need this for, who has to manually correct hundreds of volumes and therefore requires a very streamlined workflow. Also, I do not want to perform a new watershed on the whole image but only split a single label further.
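
For the non-GUI part, re-running a watershed only inside a single label, given a few seed coordinates, could look roughly like this in scikit-image (the filenames, label id and coordinates are placeholders):

# Sketch: split one label of an existing label image into several pieces,
# given manually placed seed voxels, without touching the other labels.
# Filenames, label id and seed coordinates below are placeholders.
import numpy as np
import tifffile
from scipy import ndimage as ndi
from skimage.segmentation import watershed

labels = tifffile.imread("labels.tif").astype(np.int32)
target = 7                                           # the label to split further
seeds_zyx = [(12, 40, 55), (12, 70, 60)]             # manually picked seed voxels

region = labels == target
dist = ndi.distance_transform_edt(region)

markers = np.zeros_like(labels)
for i, (z, y, x) in enumerate(seeds_zyx, start=1):
    markers[z, y, x] = i

split = watershed(-dist, markers=markers, mask=region)   # acts only inside the label
next_id = labels.max() + 1
labels[split > 1] = next_id + split[split > 1] - 2        # seed 1 keeps the original id
tifffile.imwrite("labels_split.tif", labels)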

I know a student made a version of the Morphological Segmentation plugin a few years ago with that option; you might try to contact her: https://github.com/L-EL
