Unsupervised 3D segmentation and slicing of thin branching brain cells


I’m analyzing 3D image sets consisting of multiple microscopy images of brain cells. The data dimensions are (60, 1024, 1024). I was able to visualize the 3D data in napari. I need to extract a 2D max intensity projection of each individual cell.

Analysis goals

I was hoping to somehow detect each cell and make an isolated max intensity projection of it, so that cells above or below it don’t overlap in the projection.
Mainly, my aim is to isolate individual cells for independent analysis.


I am facing the following problems:

  • Overlap of cells occurs in the max intensity projections.
    (Images: 3D top-view and 3D side-view.)

    To resolve this issue, I was also hoping to extract individual cells from the 3D image for separate processing in an unsupervised way (since I’m hoping to do this for a large batch). But I have no idea how to computationally determine the continuity of an object in 3D.
    Manually sliced 3D example:
    manual_sliced.tif (343.5 KB)

  • What have you tried already?
  • I tried running blob detection methods from skimage on each 2D layer, but they couldn’t detect many cell bodies (centers). Even if I got that to work, I believe it still wouldn’t account for the branching of cells.
  • I tried 3D segmentation techniques using StarDist3D but the objects are too small for it.

Another issue with my analysis, which I found a solution to:

  • Upon projection, the noise in all layers also adds up, making the 2D projection very noisy. So I used an NL-means filter to denoise each 2D layer, with parameters chosen by J-invariant calibration (thanks to @emmanuelle). I’m satisfied with the denoising results, but the function returned by skimage.restoration.calibrate_denoiser runs very slowly if I have to run it on the whole 3D image frame by frame.

Hi @kushaangupta !

Did you know that the non-local means function can run on 3D images? It should speed up your computation (you can run the J-invariant calibration on a fraction of the image to speed up things, but I would recommend that the subset is 3D as well).
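A minimal sketch of that suggestion, assuming a scikit-image version where denoise_nl_means accepts 3D grayscale input directly; the small random volume and the parameter ranges are placeholders, not the real data:

```python
import numpy as np
from skimage import img_as_float
from skimage.restoration import calibrate_denoiser, denoise_nl_means

# Hypothetical small 3D stack standing in for the real (60, 1024, 1024) data.
rng = np.random.default_rng(0)
volume = img_as_float(rng.random((10, 64, 64)).astype(np.float32))

# Run the J-invariant calibration on a small *3D* crop so the parameter
# search stays cheap (these parameter ranges are illustrative guesses).
subset = volume[:, :32, :32]
parameter_ranges = {'h': [0.05, 0.1], 'patch_size': [3], 'patch_distance': [4]}
denoiser = calibrate_denoiser(subset, denoise_nl_means, parameter_ranges)

# The calibrated denoiser can then be applied to the full 3D volume at once.
denoised = denoiser(volume)
```

The key point is that both the calibration subset and the final input keep all three dimensions, so the chosen parameters reflect the 3D neighborhood structure.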

For the segmentation, have you tried hysteresis thresholding? With some manual adjustment of the two bounds, it could give good results, I hope. Or you could do a flood fill starting from local maxima of the image (then add all the flood-fill masks together).

If the cells don’t touch each other, you can label each of them using skimage.measure.label (see the gallery example). Then you can use measure.regionprops to extract a bounding box for each cell.

If the cells touch each other this will be more complicated, you will have to find a criterion for separating them (in this case you should give more info here on how to separate cells).
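Under the assumption that the cells don’t touch, the suggested pipeline could be sketched roughly like this; the toy volume and the threshold bounds are made up for illustration:

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label, regionprops

# Toy 3D stack with two bright "cells" (hypothetical stand-in data).
volume = np.zeros((8, 32, 32))
volume[2:5, 4:10, 4:10] = 1.0
volume[3:6, 20:26, 18:24] = 1.0

# Hysteresis thresholding: voxels above `high` seed the mask, which then
# grows through all connected voxels above `low`.
mask = apply_hysteresis_threshold(volume, low=0.3, high=0.7)

# Label connected components in 3D and extract one bounding box per cell.
labels = label(mask)
boxes = [region.bbox for region in regionprops(labels)]  # (z0, y0, x0, z1, y1, x1)
```

Each bounding box can then be used to crop one cell’s sub-volume out of the original stack for separate processing.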


Thanks @emmanuelle !
I compared the non-local means function’s performance on 3D grayscale images using the following code and found that, for some reason, it runs very slowly (~9 min) compared to looping and denoising frame by frame (~1 min). I don’t understand the reason for such a large performance difference, since denoise_nl_means with a 3D input image also loops in Cython.

For denoising via NL means, I got better results by estimating the parameters & sigma on the max intensity projection of the whole original 3D image, rather than calculating them on a 3D subset (the latter did not remove much noise from the image).

Since calibrate_denoiser returns a partial function that is costly to call multiple times, I used denoiser.keywords['denoiser_kwargs'] to extract the best parameters and called denoise_nl_means with those parameters for layer-by-layer denoising.

import numpy as np
from skimage import img_as_float
from skimage.restoration import (calibrate_denoiser, denoise_nl_means,
                                 estimate_sigma)

float_original = img_as_float(original)  # has shape = (60, 1024, 1024)
frame = np.max(float_original, 0)

sigma_est = estimate_sigma(frame)
parameter_ranges_nl = { 'sigma': [0.0], 'h': np.arange(.8, 1.2, .2) * sigma_est,
                                        'patch_size': np.arange(2, 6),
                                        'patch_distance': np.arange(2, 7),
                                        'fast_mode': [True] }

denoiser = calibrate_denoiser(frame, denoise_nl_means, parameter_ranges_nl)
denoise_parameters = denoiser.keywords['denoiser_kwargs']
denoised_in_3D = denoise_nl_means(float_original, **denoise_parameters)  # took ~9 mins

denoised_by_layers = np.zeros(float_original.shape)

for i in range(denoised_by_layers.shape[0]):  # took ~1 min
    denoised_by_layers[i] = denoise_nl_means(float_original[i], **denoise_parameters)

In the above images, we can see that, while denoised_by_layers retains somewhat more noise, it loses less information about the branches than denoised_in_3D.

Hysteresis thresholding -> skimage.measure.label -> measure.regionprops works very well on the data. Though not a completely unsupervised way to segment the cells, it’s surely the best way I’ve found yet. I’m happy with the results. Thanks again!
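For completeness, one way to get the isolated per-cell max projections from such a labeling could look like this sketch; the toy data is made up, and masking by each cell’s own label before projecting is my assumption about how to keep other cells out of the 2D image:

```python
import numpy as np
from skimage.measure import label, regionprops

# Toy segmented stack (in practice `volume` would be the denoised data and
# the mask would come from hysteresis thresholding).
volume = np.zeros((6, 24, 24))
volume[1:3, 2:8, 2:8] = 1.0      # cell 1
volume[3:5, 14:20, 12:18] = 0.8  # cell 2
labels = label(volume > 0.5)

# Zero out everything outside each cell's own 3D mask before projecting,
# so cells above or below it cannot overlap in the 2D projection.
projections = {}
for region in regionprops(labels):
    masked = np.where(labels == region.label, volume, 0.0)
    projections[region.label] = masked.max(axis=0)
```

Each entry in `projections` is then a 2D max intensity projection containing only one cell.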