I ran into a bit of a conceptual issue with how QuPath tiles images for StarDist segmentation.
It appears that each tile sent to StarDist for detection and segmentation is normalized independently, so when a tile happens for some reason to contain only background, the trained model detects a bunch of things that should not be there…
Would it be possible, when QuPath needs to tile the data, to compute the normalization values on a downsampled version of the whole image and then apply them to each tile, for consistency?
Alternatively, would it be possible to provide fixed normalization values, even if it means pre-computing them ourselves?
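For context, something like the sketch below is what I have in mind. It is untested; the model path and normalization values are placeholders, and I'm assuming the extension's `preprocess()` ops could stand in for the per-tile `normalizePercentiles()`, applying the same fixed values to every tile:

```groovy
import qupath.ext.stardist.StarDist2D
import qupath.opencv.ops.ImageOps

// Placeholder values we would pre-compute ourselves,
// e.g. from a downsampled version of the whole image
double fixedMin = 100    // assumed lower bound
double fixedRange = 255  // assumed intensity range

def stardist = StarDist2D.builder('/path/to/model.pb')  // placeholder model path
    .threshold(0.5)
    .pixelSize(0.5)
    // Instead of .normalizePercentiles(1, 99), which (if I understand
    // correctly) runs per tile, subtract/divide by fixed values so
    // every tile is normalized identically
    .preprocess(
        ImageOps.Core.subtract(fixedMin),
        ImageOps.Core.divide(fixedRange)
    )
    .build()

// Run detection within the selected annotations
def imageData = getCurrentImageData()
def pathObjects = getSelectedObjects()
stardist.detectObjects(imageData, pathObjects)
```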
See screenshot below.

All the best,