Going back to the original idea, I think there are two main ‘simple’ metrics you can get:
- density of positive cells (either expressed as a positive %, or as a number per mm²)
- hotspot area
I’d say that you can’t meaningfully get both: either you need to keep the density fixed and measure area, or keep the area fixed and measure density.
If you allow both to change, then your density can pretty much always be made higher by decreasing the area, because then you can choose only the very hottest bit of any ‘hotspot’. Taken to an extreme: if you were to treat each individual positive cell as a distinct (very small) hotspot, you’d get a lot of hotspots, each with 100% positive cells.
Previously, I suggested it would be possible to get the hotspots with the highest density for a given area.
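For concreteness, that fixed-area version could be sketched roughly like this (a minimal Python/NumPy sketch, not anything I’ve actually used; it assumes the positive-cell detections have already been rasterised into a binary map, and the window size and toy data are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy binary map: 1 where a positive cell was detected, at some fixed
# resolution (e.g. 10 µm per pixel). Purely synthetic data for illustration.
rng = np.random.default_rng(0)
positive = (rng.random((200, 200)) < 0.02).astype(float)
positive[50:70, 50:70] = (rng.random((20, 20)) < 0.4)  # a denser patch

# Keep the area fixed: a square window of constant size
window = 25

# Mean of the binary map within each window = local fraction of positive pixels
density = uniform_filter(positive, size=window, mode='constant')

# The hotspot is wherever that fixed-area density peaks
row, col = np.unravel_index(np.argmax(density), density.shape)
print(f"Hotspot centre ({row}, {col}), density {density[row, col]:.3f}")
```

Here the area is constant by construction, so the peak density is the meaningful number.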
You could also go the other way: get the hotspots with the largest area having a minimum fixed density. Basically, you’d need to calculate the local density for every pixel in the image* and identify hotspots as being clusters of pixels where this density exceeds a threshold. Now the area of that hotspot might be meaningful, but the density is not so meaningful because it depends upon whatever density threshold you chose.
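A matching sketch of this second direction, fixed minimum density with measured area (again hypothetical Python; the Gaussian smoothing standing in for “local density for every pixel”, and the particular threshold value, are my assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

# Same kind of toy binary map of positive-cell locations
rng = np.random.default_rng(1)
positive = (rng.random((200, 200)) < 0.02).astype(float)
positive[30:90, 30:90] = (rng.random((60, 60)) < 0.3)  # a denser patch

# Local density for every pixel; Gaussian smoothing here stands in for
# counting cells within some radius of each pixel
density = gaussian_filter(positive, sigma=10)

# Fix the density: hotspots are connected clusters of pixels above threshold
threshold = 0.1
hotspots, n_hotspots = label(density > threshold)

# Area of each hotspot in pixels (scale by pixel size to get mm²)
areas = np.bincount(hotspots.ravel())[1:]
print(n_hotspots, sorted(areas, reverse=True)[:3])
```

Now the areas are the meaningful output, while the densities inside each hotspot are largely an artefact of whatever threshold was picked.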
I don’t know enough about the underlying hypothesis or what exactly you are working on to be confident this is relevant, and it doesn’t pay any attention to the type of any other cell (e.g. whether the potential hotspot is anywhere close to the tumor).
Anyway, it’s something to consider. I’m sticking with my view that the definition of a hotspot is troublesome and non-obvious. One day I’ll try to write the code to implement both approaches, but for now other work is calling…
* At some manageable resolution… perhaps 10 µm per pixel.