Using a label image from StarDist as segmentation input to Ilastik (or other program)

Hi Everyone,

I trained my own model in StarDist and I must say it works fantastically. Now I want to use the output as a segmentation image in ilastik to do object recognition in the other channels of my image. My problem is this: the output of StarDist is a label image, but what ilastik needs is a binary mask. How do I convert the label image into a mask in which separate objects are not touching?
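(For reference, the kind of conversion I have in mind would look something like this in Python; a sketch using scikit-image that knocks out the pixels where two labels touch, assuming `labels` is the StarDist label array loaded with NumPy. Not sure it's the right way:)

```python
import numpy as np
from skimage.segmentation import find_boundaries

def labels_to_separated_mask(labels: np.ndarray) -> np.ndarray:
    """Turn a label image into a binary mask with a 1-pixel gap
    wherever two different labels touch."""
    mask = labels > 0
    # Object pixels adjacent to a different label OR to background...
    all_edges = find_boundaries(labels, mode="inner")
    # ...minus object pixels adjacent to background only,
    # leaving just the label-vs-label contact pixels.
    bg_edges = find_boundaries(mask, mode="inner")
    touching = all_edges & ~bg_edges
    return mask & ~touching
```

This only erodes pixels along label-label contacts, so objects that never touch a neighbour keep their full area.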

If I do the StarDist process in ImageJ, the objects (nuclei) appear in the ROI Manager, but they overlap. Here's a screenshot of the label image, and below it the same image with the ROI outlines.

[Screenshot: StarDist label image (top) and the same image with overlapping ROI outlines (bottom)]

I noticed this comment in the QuPath docs for StarDist: "Another difference is in how overlapping nuclei are handled. The Fiji plugin allows overlaps, controlled with an overlap threshold parameter. QuPath does not permit overlapping nuclei. Rather, it handles overlaps by retaining the nucleus with the highest prediction probability unchanged, and removing overlapping areas from lower-probability detections, discarding these detections only if their area decreases by more than 50%."
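(In code terms, I read that rule as something like the following; a toy sketch with boolean masks and probabilities, not QuPath's actual implementation:)

```python
import numpy as np

def resolve_overlaps(detections):
    """detections: list of (probability, bool_mask) pairs, all masks the
    same shape. Returns the kept masks after QuPath-style resolution."""
    claimed = np.zeros(detections[0][1].shape, dtype=bool)
    kept = []
    # Highest-probability detections claim their pixels first
    for prob, mask in sorted(detections, key=lambda d: -d[0]):
        clipped = mask & ~claimed  # remove already-claimed areas
        # Discard only if the area decreased by more than 50%
        if clipped.sum() >= 0.5 * mask.sum():
            kept.append(clipped)
            claimed |= clipped
    return kept
```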

I can see the advantage of overlapping ROIs that map to the potential true dimensions of an object, but for practical purposes I need a binary mask of non-overlapping objects. Does anyone have a solution for this, aside from abandoning the ImageJ/Python ship I'm on and learning to script in Groovy?

Thanks for your time in advance! John

There might not be much Groovy to learn, since QuPath does the majority of the work for you and makes it available in a few lines… You'd have to combine info from the following links, or just ask if this is what you want to do.

(But then QuPath also automatically measures all channels of your multichannel image and has an object classifier, so you might not have to export…)


Thanks for your suggestions and for your huge contribution to this field.

Agreed. I'm already using QuPath scripts to stitch .qptiffs and run a StarDist model, and I can see how user-friendly it is. The specialized pathology functionality is very tempting; it seems inevitable that I will learn more.

Maybe QuPath is already capable of this, but one thing I'm trying to do is train a model on markers in other channels using annotations I've created manually (outside of QuPath). I manually annotated my DAPI image and used that to create a custom StarDist model. I went further, though, and annotated those nuclear detections in the other channels, so I have a training set of highly validated data. Can QuPath import that and use it to train a model?

Thanks again for your time. -John


Thanks @johnmc :slight_smile:

Pretty much anything should be possible via scripting, but whether it’s worth the effort to do it in QuPath is another matter – especially if you’ve got a pipeline working using something else.

This might be a bit relevant with regard to the import: "Does anyone have an importing masks script corresponding to the working export tiles script". There are also other possibly relevant sections in the tutorials regarding both pixel and object classification: https://qupath.readthedocs.io/en/latest/docs/tutorials/index.html

However, if your goal is cell segmentation using information from multiple channels, then I'm not sure QuPath's built-in features for pixel classification will be sufficient. They might be, but converting a probability map into actual 'cell objects' would still require quite a lot of custom work for now. I'm also not sure I've understood exactly what you want to do with your external annotations.

I will check out your suggestions.

Regarding external annotations: I basically manually counted my data set as a way to analyze it by the so-called "gold standard". Now that I've done that, I have a data set I could use to train a classifier and then apply to more data I haven't manually annotated. StarDist lets me do that for the segmentation part of the process; now I want to extend into the other channels of my images and tag each nucleus as positive or negative for those other markers.

Does this make sense? I have a "ground truth" data set in which I've annotated where every nucleus is and whether it's positive or not in the other channels. The nuclei (DAPI) part allowed me to create a really nice segmentation model in StarDist; now I want an equally good classifier for the other markers. I have a training set ready to go, but I don't yet have a process for creating a classifier from that training data.


Ah, it sounds like this is the relevant QuPath part:


Yes, I've looked at that page, but I wasn't understanding how to bring in detections from another source.
I.e., for the Cell Detection part I will want to use the ones I'm getting from the StarDist result. This is possibly where scripting can bypass the GUI requirement for QuPath's Cell Detection and use detections from StarDist instead…

Same thing for the Option #2 machine-learning process for training classifiers. Can I import the binary masks I created elsewhere (perhaps after making them QuPath-compatible), turn them into detections, and use those for training? Essentially, skip the part where I annotate in QuPath what is positive and negative?

Hmmm, I assumed in this scenario you would be running StarDist directly through QuPath.

If you do, then all your cells would be measured immediately.

You wouldn't need to import a mask to train a classifier, only points corresponding to the centroids of your annotated regions; it's likely to be easier to train a classifier using points, and they only need to fall somewhere within the corresponding cell. The points could potentially be generated via a QuPath script that takes your binary images as input, but if you prefer scripting in something other than Groovy (as many people do…) it's probably easier to do it elsewhere.
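For example, a minimal Python sketch using scikit-image (assuming your binary mask loads as a NumPy array; matching the output to QuPath's point-file format would be a separate step):

```python
import numpy as np
from skimage.measure import label, regionprops

def mask_to_centroids(mask: np.ndarray):
    """Return one (x, y) centroid per connected object in a binary mask."""
    lab = label(mask)
    # regionprops centroids are (row, col); swap to (x, y) for point import
    return [(p.centroid[1], p.centroid[0]) for p in regionprops(lab)]
```

Each centroid is guaranteed to fall within (or at least near) its object for convex nuclei, which is all the classifier-training points need.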

The counting tool already has buttons to save/load points, so you could ensure yours match the same format. Alternatively, importing points with a script wouldn't be too hard if you post the exact format in which your points exist.

Basically, it should be possible to do most of what you want entirely within QuPath, avoiding the need to transfer between different applications. But if you'd rather run StarDist through Python and link up with ilastik, there's not much point getting QuPath involved.

No, yes! I would be running StarDist in QuPath. My question was more about whether that output would be directly available to the QuPath machine-learning workflow. For the next part, I can easily find the centroids of the positive nuclei and make them available to QuPath training via "load points". Thanks, I'll give this a try!

If I understand correctly, then yes! There's nothing special about QuPath's built-in cell detection; in fact, it's designed to be interchangeable with other methods as they come along in the future. As long as your cells have measurements, they can be used in a machine learning workflow (and if they don't have measurements, these can be added).

Hang on, I was confused. Yes, I want to use StarDist in QuPath to detect nuclei, then use a classifier I create in QuPath to mark up the other channels; that's the final goal. But to use my pre-annotated data for the other channels, I need to use the same nuclei detections that pre-annotated data was based on, so the detections all line up between channels. I think I can accomplish this by bringing in the binary mask I manually created for the nuclei, thresholding it in QuPath, and then importing the centroid points of the pre-annotated data to train a classifier. Then I can move on to the final goal of analyzing my new data! Just clarifying in case someone besides me is confused.