When detecting “stuff” in large areas, detection is conducted in “tiles”. I guess this is how it must be done because of memory constraints. However, detections along the tile edges get truncated when in reality they span two or more tiles. This is perhaps most apparent when using superpixels.
Yes, with cell detection there is some degree of overlap between the tiles, and the ‘largest’ detection is retained when multiple detections are found for the same pixels (on the assumption that it is the ‘least clipped’ one). This doesn’t work perfectly and could be improved, but it should be better than making no effort at all.
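The “keep the largest” resolution step can be sketched roughly as follows. This is a minimal Python illustration, not QuPath’s actual code; representing detections as pixel sets and using set intersection as the overlap test are simplifying assumptions:

```python
def resolve_overlaps(detections):
    """Keep only the largest of any group of detections covering the
    same pixels, assuming the largest is the least clipped.

    Each detection is a dict with an 'area' and a set of 'pixels'.
    """
    # Consider the largest detections first, so clipped duplicates lose
    kept = []
    for det in sorted(detections, key=lambda d: d["area"], reverse=True):
        # Discard this detection if an already-kept one shares pixels with it
        if not any(det["pixels"] & k["pixels"] for k in kept):
            kept.append(det)
    return kept

# Two versions of the same cell from adjacent overlapping tiles
# (one clipped at the tile edge), plus an unrelated cell:
dets = [
    {"area": 4, "pixels": {(0, 0), (0, 1), (1, 0), (1, 1)}},                  # clipped
    {"area": 6, "pixels": {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)}},  # full
    {"area": 3, "pixels": {(5, 5), (5, 6), (6, 5)}},                          # unrelated
]
print(len(resolve_overlaps(dets)))  # → 2: the clipped duplicate is dropped
```

The greedy largest-first order matters: once the full detection is kept, any smaller detection touching the same pixels is treated as a clipped duplicate and discarded.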
However, there is no overlap when using superpixels. This is because the size of the superpixels is rather harder to predict, and they aren’t inherently meaningful shapes anyway. Merging superpixels across tiles is also likely to produce weird results, and I can’t see an easy solution to this. Additionally, every pixel needs to be unambiguously assigned to a single superpixel.
An alternative superpixel implementation might do this job better, but I think the pixel classifier is likely to replace superpixels for many applications… and for me the pixel classifier is a higher priority. With that in mind, truncation at tile boundaries is a limitation of my superpixel implementation that isn’t likely to go away soon.
How does one go about setting up a pixel classifier?
The pixel classifier requires a milestone version of QuPath. You can find it under the Classify menu.
You will need at least two annotated classes to use it. The Ignore class and Region class are special cases: pixels in Ignore regions are excluded from the calculations, and the Region class is not used at all. It also currently cannot be scripted to “Run for project.” These are all things to keep in mind when using it, or when deciding whether to use it for your project.
Is it possible to run cell detection on the whole slide and then add tiles as annotations without deleting the cell detection objects?
Currently, when I tried cell detection on the whole slide and then introduced tiles as annotations, it deleted the objects, whereas if I draw an annotation as an object it is OK and takes the previously detected cells into consideration.
When creating tiles, maybe there should be an option to keep existing cell detections.
You cannot, but you can store the cells separately and then load them back in. You will want to modify the script here so that it only saves and loads: split it into one script for each, and ignore/delete the rest.