Tips to improve 3D cell segmentation results in Fiji?

We’re trying to segment and count EdU-stained cell nuclei in a stack of images and we’re not quite happy with the results so far. We’re convinced they can be improved, but don’t know how to go about it. After watching many videos (there’s great material online!) and reading posts in this forum, we’ve set up the following pipeline (a rough Python sketch follows the list):

  1. Background subtract
  2. Trainable Weka Segmentation 3D
  3. Convert results to 8-bit grayscale, fill holes and apply Gaussian blur 3D.
  4. 3D watershed split.
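In case it helps to see the steps concretely, here is a rough Python/scikit-image sketch of steps 1–3 (this is not what we actually run in Fiji; the file name and all parameter values are placeholders):

```python
from scipy import ndimage as ndi
from skimage import filters
from tifffile import imread

# placeholder path -- the EdU channel as a (z, y, x) stack
stack = imread('masked_stack.tif').astype(float)

# step 1: background subtraction (a white top-hat as a rough stand-in for Fiji's
# rolling-ball subtraction; the size is a placeholder, roughly twice the cell radius)
foreground = ndi.white_tophat(stack, size=50)

# step 2 stand-in: a plain Otsu threshold instead of the Weka 3D probability map
mask = foreground > filters.threshold_otsu(foreground)

# step 3: fill holes, then a light 3D Gaussian blur (sigma around 1)
mask = ndi.binary_fill_holes(mask)
smoothed = filters.gaussian(mask.astype(float), sigma=1)
```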

After segmenting, we tried several filters. Erosion is definitely not helpful in our case. I’m not sure the Gaussian blur is helping much either, since we have to use very low sigma values (around 1): we need to preserve the shape of the blobs where several cells are stuck together, so that the watershed can still split them. We retrained the Weka segmentation classifier several times, but don’t know how to improve the outcome further. We have also tried playing around with the parameters of the watershed split. For example, using the probability map from the Weka segmentation as seeds doesn’t seem to help. Our best results are with automatic seeds and a radius of 25 px (roughly the size of a cell).
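For completeness, the kind of seeded split we’re after would look roughly like this in Python, continuing from the sketch above (skimage’s watershed seeded from distance-map maxima; the 25 px minimum seed distance matches the cell radius mentioned above, everything else is a placeholder):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# "smoothed" comes from the sketch above (the blurred binary mask)
binary = smoothed > 0.5

# distance to the nearest background voxel (z anisotropy is ignored here)
distance = ndi.distance_transform_edt(binary)

# seeds: local maxima of the distance map, at least one cell radius (~25 px) apart
peaks = peak_local_max(distance, min_distance=25, labels=binary)
seeds = np.zeros(binary.shape, dtype=int)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# step 4: watershed on the inverted distance map, restricted to the mask
labels = watershed(-distance, seeds, mask=binary)
print(labels.max(), "objects after splitting")
```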

It’s our first time trying to segment cells. We would really appreciate tips from more experienced users on how to improve our results. Specifically, how can we get the “big blobs” to split into separate cells?

Note that this post is related to our previous query, but I’m opening a new thread because we solved the “floating pixels” problem and our question is no longer exclusively about the Weka segmentation:

Here is an overview of the pipeline and the results we’re getting so far. I marked with green circles the cells that were split correctly, with orange the cells that were split more or less OK, and with red the “big blobs” that failed to split. Any ideas on how we can get these to split?


(Ignore the horizontal red line in the central image)

The original masked image stack can be found here:

If you’re up for using some Python notebooks to compute the actual 3D segmentations, then StarDist (https://github.com/mpicbg-csbd/stardist) currently has some of the best results in the community. There are some new alternatives like CellPose and StarFinity, but StarDist is the best supported at the moment and should work well for your data.
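Once you have a trained model, running it on a stack is only a few lines; the model name and file path below are just placeholders for whatever you end up using:

```python
from csbdeep.utils import normalize
from stardist.models import StarDist3D
from tifffile import imread

# placeholder path and model name -- point these at your own data and trained model
img = imread('masked_stack.tif')                          # (z, y, x)
model = StarDist3D(None, name='stardist_edu_nuclei', basedir='models')

# percentile-normalize the raw stack, then predict one label per nucleus
labels, details = model.predict_instances(normalize(img, 1, 99.8))
print(labels.max(), "nuclei detected")
```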

~Kyle

As @kephale already indicated, stardist might work to disentangle those nuclei. To use stardist, you would however need to manually annotate some crops of your data (e.g. 3-5 stacks of size 32x128x128). Afterwards, you can use our example notebooks to train a stardist model.
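Very roughly, the training part of those notebooks boils down to something like the following; the paths, model name and config values here are only indicative, not tuned for your data:

```python
from glob import glob
from csbdeep.utils import normalize
from stardist import Rays_GoldenSpiral
from stardist.models import Config3D, StarDist3D
from tifffile import imread

# placeholder paths -- raw crops and matching integer label stacks (your manual annotations)
X = [normalize(imread(f), 1, 99.8) for f in sorted(glob('train/images/*.tif'))]
Y = [imread(f) for f in sorted(glob('train/masks/*.tif'))]

conf = Config3D(
    rays=Rays_GoldenSpiral(96),        # 96 rays is the notebook default, not a tuned value
    grid=(1, 2, 2),                    # subsample x/y to save memory
    n_channel_in=1,
    train_patch_size=(32, 128, 128),   # matches the crop size suggested above
)
model = StarDist3D(conf, name='stardist_edu_nuclei', basedir='models')

# use the last crop as a (tiny) validation set, just to make the sketch run
model.train(X[:-1], Y[:-1], validation_data=(X[-1:], Y[-1:]), epochs=100)
```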
Good luck! 🙂

Thanks @kephale @mweigert. Indeed, stardist looks promising for the type of images we’re working with. We’ll look into it for sure. Your tips are greatly appreciated!

We were hoping, though, to improve the results for this image stack with our current pipeline in Fiji; it feels like we’re so close to getting that segmentation right… Should we really just give up on Weka and move to stardist?