QuPath: Is it possible to count positive area in ‘cytoplasm’?

I’m tinkering with QuPath 0.2.0, trying to create a protocol simple enough to hand off to end users to apply to their slides, but I’m completely new to it and not sure whether what I’m trying to do is possible, though it’s conceptually simple. I can segment nuclei and get a good expanded ‘cytoplasm’ region around them, but I then want to threshold and count the fraction of positive area within that expanded ‘cytoplasm’ region. Using the subcellular detection function will give me this for the overall cell; is there a way to get it divided by subregion? There are obviously other ways of doing this, but it would be really nice to work directly off the slides.

If you can provide some examples (raw data is best), I might be able to take a look, but for the moment, subcellular detections are your easiest option. Pete does have a script that classifies them by whether their centroids fall within the nucleus or not, but you would need to make sure they are well split for that to be effective (one large blob that happened to have its centroid inside the nucleus would be misclassified, for example). Depending on the version, there are various ways of getting the intersection of two ROIs, or of subtracting one from the other. You could perform this on each individual subcellular detection against its cell’s nucleus, removing the nuclear area. It would likely be very slow without some clever coding.
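Something along these lines might be a starting point for that ROI math, if you do go the scripting route. It’s an untested sketch against the 0.2.x Groovy API; it assumes the subcellular detections sit as direct children of each cell, and the measurement names at the end are just placeholders:

```groovy
// Rough, untested sketch (QuPath 0.2.x): for each cell, subtract the nucleus ROI
// from the cell ROI to get a 'cytoplasm' ROI, then intersect each subcellular
// detection with it and sum the overlapping areas.
import qupath.lib.roi.RoiTools
import qupath.lib.roi.RoiTools.CombineOp

def cells = getCellObjects()
for (cell in cells) {
    def cellROI = cell.getROI()
    def nucleusROI = cell.getNucleusROI()
    if (cellROI == null || nucleusROI == null)
        continue

    // 'Cytoplasm' = cell area minus nuclear area
    def cytoplasmROI = RoiTools.combineROIs(cellROI, nucleusROI, CombineOp.SUBTRACT)
    double cytoplasmArea = cytoplasmROI.getArea()

    // Sum the area of each subcellular detection that falls inside the cytoplasm
    // (assumes subcellular detections are stored as children of the cell)
    double positiveArea = 0
    for (spot in cell.getChildObjects()) {
        def spotROI = spot.getROI()
        if (spotROI == null)
            continue
        def overlap = RoiTools.combineROIs(spotROI, cytoplasmROI, CombineOp.INTERSECT)
        if (!overlap.isEmpty())
            positiveArea += overlap.getArea()
    }

    double fraction = cytoplasmArea > 0 ? positiveArea / cytoplasmArea : Double.NaN
    // Measurement names below are made up for illustration
    cell.getMeasurementList().putMeasurement("Cytoplasm positive area px^2", positiveArea)
    cell.getMeasurementList().putMeasurement("Cytoplasm positive fraction", fraction)
}
fireHierarchyUpdate()
```

Looping over every subcellular detection and calling combineROIs per spot is exactly the part that could get slow on large images, hence the warning about clever coding.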

If your cells are not evenly spaced, be careful about how this type of classification, or cell math, is affected by tumor versus stroma, or by the edges of various structures. The edges of your densely packed tumor or tissue slice might appear to have a lower percentage of positive area simply because the cytoplasm can expand further out into empty space without hitting another cell.

Thanks for the thorough reply. The possibility of bias from crowding makes sense, but it isn’t an issue here: these aren’t even real ‘nuclei’, they’re plaques that are pretty sparse. I really appreciate the offer of help, though I don’t have permission to release example data at the moment. As you note, the problem is that I need ‘cytoplasm’ spot area / ‘cytoplasm’ area; spot area / ‘cell’ area seems straightforward. It sounds from your answer like this would be possible with a reasonable level of scripting? I wouldn’t mind investing in digging into things if the API has the hooks to do exactly that, as long as once written the script would be user friendly to run. Again, the issue isn’t so much calculating this number some way, which I could do with exported ROIs. I’m trying to come up with a simple workflow (if it’s doable in a reasonable amount of work) for a couple of tasks, to put a protocol in users’ hands rather than me (or someone else) being responsible for all the bookkeeping of doing it for them. Thanks again!

*Sort of.

And by that I mean, the technical aspects of what it sounds like you want to do can all be combined into a giant script and run with Run for Project. I would make sure that you build it in such a way that all the important variables are at the top if you want another user to… well, make use of it. Bulbs go dim, antibody lots change, etc., so whatever thresholds you are using will need to be editable.
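For example (the names and values here are only placeholders, not anything from an actual pipeline):

```groovy
// Minimal sketch: keep anything a user might need to change in one obvious
// place at the top of the script. Names/values are placeholders.
double stainThreshold = 0.3      // positive-pixel threshold; tweak per batch / antibody lot
double minSpotArea    = 2.0      // smallest subcellular detection to keep, in µm^2
String reportFolder   = "results" // where exported measurements should go

// ... the rest of the pipeline (detection, ROI math, export) should only ever
// read these variables, never hard-code its own numbers further down.
```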

That said, here is a post on ROI intersection from a while back, though things are changing rapidly and I no longer remember which version of QuPath it was for or whether it will still work. I have a feeling it was old.
Another link that I am pretty sure involves subtraction of two ROIs/areas.

My guide has links to further coding examples.

This guide has an example of building a script for a project and exporting results, but has nothing to do with ROI editing except for a few bits in the tissue detection where I close holes and remove small bits.
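The export step of such a script can be very small when run with Run for Project. A rough sketch using the 0.2.x scripting methods, where the subfolder and file naming are just examples:

```groovy
// Untested sketch (QuPath 0.2.x, intended for "Run for project"):
// write one detection-measurement file per image into the project folder.
def name = getProjectEntry().getImageName()
def dir = buildFilePath(PROJECT_BASE_DIR, 'results')   // 'results' subfolder is arbitrary
mkdirs(dir)
saveDetectionMeasurements(buildFilePath(dir, name + '_detections.txt'))
```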

Another post with examples of scripting resources.

Thanks, that looks like a really useful place to start. I’m not so much thinking about a one-click thing as letting users run the segmentation themselves and then scripting the building of a report that computes those area intersections for every ‘cell’, so it looks like it would mostly be a matter of familiarizing myself with the internal representation of the detection hierarchy. I’ll start poking at it (something like the snippet below, from a first read of the docs). Thanks again.
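A first guess at that poking, against the 0.2.x Groovy API (untested; the printed labels are just for my own orientation):

```groovy
// Untested sketch: walk the hierarchy to see how cells and their
// subcellular detections are related (QuPath 0.2.x scripting API).
def cells = getCellObjects()
println "Cells found: " + cells.size()
for (cell in cells) {
    def spots = cell.getChildObjects()   // subcellular detections live as children of the cell
    println cell.toString() +
            " | has nucleus ROI: " + (cell.getNucleusROI() != null) +
            " | subcellular detections: " + spots.size()
}
```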
