Validation steps in StarDist – two doubts

Hi all,

With my colleague @constantinpape, we are trying to create a model for automated cell counting in high-density tissue images with StarDist.

I perform the annotations in QuPath and he performs the training using the annotated images.
Now we are at the validation step.

The annotations were performed on 10 single-channel (NeuN) images of size 513x513.
I have two questions:

  1. For validation, can I use images of a different size than those used in training? Are there any rules or recommendations about this?

  2. The predictions obtained with the model we have are not very good: StarDist always predicts fewer cells than the real number. Maybe the problem is in the annotations.
    - When I annotated the images, I tried to use the same intensity level, because I wanted to set a threshold for my eyes. But what about StarDist, how does the program see the intensity? Maybe it is using a lower intensity and that is why it is predicting fewer cells.

Once trained, a StarDist model can be applied to images with different pixel dimensions. However, the number of channels must not change, and the sizes of the objects to be segmented should be comparable to those in the training data.
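As a quick sanity check of that last point, you can compare the typical object size between your training and validation annotations. A minimal numpy sketch (not part of StarDist; the function name is my own) that estimates the median equivalent diameter from an instance-label mask:

```python
import numpy as np

def median_object_diameter(labels):
    """Median equivalent diameter of labeled objects in an instance
    mask (integer labels, 0 = background). Useful to check whether
    validation objects are comparable in size to the training data."""
    areas = np.bincount(labels.ravel())[1:]  # pixel count per label id
    areas = areas[areas > 0]                 # drop unused label ids
    # equivalent diameter of a circle with the same area
    return float(np.median(2 * np.sqrt(areas / np.pi)))
```

If the two medians differ a lot, rescaling the validation images before prediction may help.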

There can be several reasons. First, check that your validation images are properly normalized. You can also try to decrease the probability threshold to obtain more predicted objects, likely at the expense of more false positives.
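For reference, the normalization commonly used with StarDist is percentile-based. A minimal numpy sketch of the idea (the 1 / 99.8 percentile defaults are the ones used in the StarDist examples; the function name here is my own, not the library's):

```python
import numpy as np

def percentile_normalize(img, pmin=1, pmax=99.8, eps=1e-8):
    """Rescale intensities so the pmin percentile maps to ~0 and the
    pmax percentile to ~1, making predictions robust to the absolute
    brightness of the input image."""
    lo, hi = np.percentile(img, (pmin, pmax))
    return (img - lo) / (hi - lo + eps)
```

If your validation images are much dimmer than the training images and are not normalized this way, the model will effectively see different intensities than it was trained on.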

Does that mean you don’t annotate dimly visible cells? If so, that could explain why StarDist is predicting fewer cells than you want it to. Depending on how @constantinpape trained the model, data augmentation may have been used to change the brightness and contrast of the input images, to make the model robust against such changes. Hence, please annotate all nuclei that you can see in the image.


@uschmidt83, to answer your question: I annotate all the nuclei, everything including dimly visible cells.

@Mariya_Timotey_Mitev I’m not sure if you’re validating this by using the model directly in QuPath for various images and checking them by eye. If so, there are two things to keep in mind:

  • The brightness/contrast settings in QuPath should not affect the detection in any way
  • When using StarDist, the normalization applied in QuPath can be adjusted within the detection scripts – and this can impact your results. The default may not be best. See StarDist in QuPath Normalization issue for some more info about normalization complications and alternatives.