Comparison between 3D Ground Truth and 3D StarDist Segmentation

After almost a year of annotation, troubleshooting, testing, and model development, I was finally able to get my first good results using 3D StarDist :smiley:! But now I am facing a new challenge: comparing my ground truth with my prediction results.

What I would like to do is visualize both the ground truth (GT) and the prediction in the same window to compare them visually. In addition, I would like to compute the volume and centroid of each nucleus to perform a quantitative comparison of the two data sets.
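For the quantitative part, per-nucleus volumes and centroids can be computed directly from a 3D label image with plain numpy. A minimal sketch (the demo array below is made up; voxel counts must be multiplied by your voxel volume to get physical units such as µm³):

```python
import numpy as np

def volumes_and_centroids(labels):
    """Per-label voxel counts and centroids from a 3D label image.

    Returns two dicts keyed by label id (background label 0 is skipped).
    """
    ids = np.unique(labels)
    ids = ids[ids != 0]
    volumes, centroids = {}, {}
    for i in ids:
        coords = np.argwhere(labels == i)   # (n, 3) z/y/x voxel coordinates
        volumes[i] = len(coords)            # voxel count; scale by voxel volume for µm³
        centroids[i] = coords.mean(axis=0)  # mean voxel position = centroid
    return volumes, centroids

# Tiny demo: two toy "nuclei" in an 8x8x8 stack
labels = np.zeros((8, 8, 8), dtype=np.uint16)
labels[1:3, 1:3, 1:3] = 1                   # 2x2x2 cube -> volume 8
labels[5:8, 5:8, 5:8] = 2                   # 3x3x3 cube -> volume 27
vol, cen = volumes_and_centroids(labels)
print(vol[1], vol[2])                       # 8 27
print(cen[1])                               # [1.5 1.5 1.5]
```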

Until now, I have tried to use the Fiji 3D ImageJ Suite to compute the centroids and volumes, but the values are too different for me to be able to say whether they are correct or not. Besides, I have tried to visualize my images using


but I have not been able to visualize my images or get any output from them. If someone can help me, I would really appreciate it. :smiley: Thanks in advance.


Hi, our Jupyter notebooks do a quantitative evaluation at the end of training (cf. section Evaluation and Detection Performance).
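For reference, that evaluation is based on matching GT and predicted labels by intersection over union (IoU). Below is a simplified numpy sketch of the idea, not the exact `stardist.matching` implementation; the greedy counting used here is only reasonable for thresholds ≥ 0.5, where each object can have at most one partner with that much overlap:

```python
import numpy as np

def iou_matrix(gt, pred):
    """Pairwise IoU between GT labels and predicted labels (label 0 = background)."""
    overlap = np.zeros((gt.max() + 1, pred.max() + 1), dtype=np.int64)
    # joint histogram of (gt label, pred label) pairs = pairwise overlap areas
    np.add.at(overlap, (gt.ravel(), pred.ravel()), 1)
    area_gt = overlap.sum(axis=1, keepdims=True)
    area_pred = overlap.sum(axis=0, keepdims=True)
    union = area_gt + area_pred - overlap
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = np.where(union > 0, overlap / union, 0.0)
    return iou[1:, 1:]                        # drop the background row/column

def counts_at_threshold(iou, tau=0.5):
    """True/false positives and false negatives at IoU threshold tau."""
    tp = int((iou.max(axis=1, initial=0) >= tau).sum()) if iou.size else 0
    fn = iou.shape[0] - tp                    # GT objects without a match
    fp = iou.shape[1] - tp                    # predictions without a match
    return tp, fp, fn

# Demo: one perfect match plus one spurious prediction
gt = np.zeros((6, 6), dtype=np.int32)
gt[:3, :3] = 1
pred = gt.copy()
pred[4:6, 4:6] = 2                            # extra object -> one false positive
iou = iou_matrix(gt, pred)
print(counts_at_threshold(iou, tau=0.5))      # (1, 1, 0)
```

The same code works on 3D label stacks, since everything is computed on the raveled arrays.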



Dear @uschmidt83, thank you for your quick answer. I am aware of the quantitative evaluation at the end of training; however, we would like to compute the difference in volume between the GT and the prediction made using StarDist. Besides, as I mentioned before, I would like to have a graphical comparison between the GT and the prediction.
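To make the volume comparison concrete, here is a hypothetical sketch of what we have in mind: pair each GT nucleus with the predicted label that overlaps it most, and report the volume difference in voxels (the label images in the demo are made up):

```python
import numpy as np

def volume_differences(gt, pred):
    """Pair each GT nucleus with its maximal-overlap predicted label and
    return {gt_label: (gt_volume, pred_volume, pred_volume - gt_volume)}."""
    overlap = np.zeros((gt.max() + 1, pred.max() + 1), dtype=np.int64)
    np.add.at(overlap, (gt.ravel(), pred.ravel()), 1)
    vol_gt = np.bincount(gt.ravel(), minlength=gt.max() + 1)
    vol_pred = np.bincount(pred.ravel(), minlength=pred.max() + 1)
    out = {}
    for g in range(1, overlap.shape[0]):
        p = overlap[g, 1:].argmax() + 1       # best-overlapping predicted label
        if overlap[g, p] == 0:
            continue                          # GT nucleus with no overlapping prediction
        out[g] = (int(vol_gt[g]), int(vol_pred[p]),
                  int(vol_pred[p]) - int(vol_gt[g]))
    return out

# Demo: an 8-voxel GT cube vs a larger 27-voxel prediction at the same spot
gt = np.zeros((6, 6, 6), dtype=np.int32)
gt[:2, :2, :2] = 1
pred = np.zeros_like(gt)
pred[:3, :3, :3] = 1
print(volume_differences(gt, pred))           # {1: (8, 27, 19)}
```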

But regarding the quantitative evaluation at the end of training: in my case, the evaluation shows that there are no false positives, which I find very strange, and that is one of the reasons I would like to perform another comparison. Here is the image obtained:

Hi @xgalindo,

I have a notebook that uses the napari viewer to compare nD segmentations; you can toggle the eye icon in napari for each layer to see how well they match each other. If you are familiar with napari and have it installed on your computer, this notebook should run just fine after you change the paths to the raw and segmentation image directories.
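The core idea boils down to a few napari calls. A minimal sketch, not the notebook itself (the file names are placeholders, and it assumes napari and tifffile are installed):

```python
import napari
from tifffile import imread

# Placeholder paths -- point these at your own raw / GT / prediction stacks
raw = imread("raw.tif")           # 3D raw image stack
gt = imread("gt_labels.tif")      # ground-truth label image
pred = imread("pred_labels.tif")  # StarDist prediction label image

viewer = napari.Viewer(ndisplay=3)
viewer.add_image(raw, name="raw")
viewer.add_labels(gt, name="ground truth")
viewer.add_labels(pred, name="StarDist prediction")
# Toggle the eye icon of each labels layer to flip between GT and prediction
napari.run()
```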


It’s already possible with our code, but it would probably take a while to explain it / make an example script. Since you also want to visualize it, you’re probably better served with a different tool.

There’s a simple explanation: the curves for false positives (fp, blue) and false negatives (fn, green) are identical, and you can’t see the blue curve because the green one is on top of it.

Sorry for the late reply,