As part of my Ph.D. project, I have been using StarDist 3D for training and prediction to segment cell nuclei from a 3D image stack of a cell organoid. I have run several tests using the Jupyter notebooks available on GitHub with different image datasets for training and testing, generating quite good results. However, I have had some problems with the model comparison, because the plots that StarDist 3D produces sometimes do not show all the required information, and I would like to know why this could be happening.
Here you can see the two sets of plots I obtained. The first set was obtained using just my own data: two fluorescence microscopy image stacks of 85 frames each, with a size of 340×310 pixels. In these plots we can observe the different metrics as well as the TP, FN, and FP counts. For the second set of plots, we used the same fluorescence microscopy images, but this time we added two of the synthetic datasets that StarDist 3D provides for training the quick-demo model to our training set.
I have checked the predictions made by each model, and the second model gives better results, but I am still curious about why the Jupyter notebook produces the second set of plots. Why does it not compute the FP? And why are the plots in general not as smooth as the first model's plots?
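For reference, here is a minimal sketch of how I understand the TP/FP/FN counts in those plots to be computed, i.e. by matching predicted instances to ground-truth instances at a given IoU threshold. This is a simplified, greedy stand-in written only with NumPy; StarDist's own `stardist.matching.matching` uses an optimal one-to-one assignment, so this is just an illustration, not the library's exact implementation:

```python
import numpy as np

def match_stats(y_true, y_pred, iou_thresh=0.5):
    """Count TP/FP/FN by greedy IoU matching of two instance label images.

    y_true, y_pred: integer label arrays (0 = background).
    Simplified illustration of what stardist.matching.matching reports;
    the real function uses an optimal (Hungarian) assignment.
    """
    true_ids = [i for i in np.unique(y_true) if i != 0]
    pred_ids = [i for i in np.unique(y_pred) if i != 0]

    # pairwise IoU between every ground-truth and predicted object
    iou = np.zeros((len(true_ids), len(pred_ids)))
    for ti, t in enumerate(true_ids):
        tmask = y_true == t
        for pi, p in enumerate(pred_ids):
            pmask = y_pred == p
            union = np.logical_or(tmask, pmask).sum()
            if union:
                iou[ti, pi] = np.logical_and(tmask, pmask).sum() / union

    # greedily match each true object to its best unmatched prediction
    matched_pred = set()
    tp = 0
    for ti in range(len(true_ids)):
        for pi in np.argsort(iou[ti])[::-1]:
            if iou[ti, pi] >= iou_thresh and pi not in matched_pred:
                matched_pred.add(pi)
                tp += 1
                break

    fp = len(pred_ids) - tp   # predictions with no matching ground truth
    fn = len(true_ids) - tp   # ground-truth objects that were missed
    return tp, fp, fn
```

My understanding is that the notebook sweeps `iou_thresh` over a range of values and plots these counts (and the derived precision/recall/F1) against the threshold, which is why I would expect FP to appear whenever the model predicts objects that match no ground-truth nucleus.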
I hope you can help me, and thanks for reading my post!