0.2.3 Tiles vs Detections and how to use them

I’m experimenting with using tiles (rather than cells or other detection objects) to segment tissue based on non-cellular features (for example, gray matter vs. white matter in brain).

After I generate tiles, I need to calculate features. When I do this (Analyze > Calculate features > … > Run) I am prompted to select either “Detections” or “Annotations”; I’ve been selecting Detections for this. When I then try to train an Object Classifier on these objects, the object filter treats “Tiles” separately from “Detections”, which is somewhat confusing considering tiles seem to be treated as detections elsewhere. What’s the rationale behind this? How should I be using it to be most robust?

On a related note, I’d love to create tiles of varying sizes and compare classifier performance… is there a canonical way to have tiles of multiple sizes like this within the same project, and use them to calculate features/train classifiers for comparison?

If you are really using 0.2.3 (the m5 is confusing), I would right-click the images after creating the initial annotations, duplicate them with data, and then perform a different tiling on each duplicate. Duplicating an entry does not duplicate the underlying image file, so there are no size/space issues.
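For the tiling step itself, something like the following should work on each duplicate. This is just a minimal sketch for 0.2.x Groovy scripting; the `TilerPlugin` parameters mirror what the Create tiles command records in the workflow, and the 100 µm tile size is only an example value you would change per duplicate:

```groovy
// Run in the script editor on each duplicated image:
// creates detection tiles inside the existing annotations.
double tileSizeMicrons = 100.0   // example value; change per duplicate

selectAnnotations()   // tiles are generated within the selected annotations
runPlugin('qupath.lib.algorithms.TilerPlugin',
        '{"tileSizeMicrons": ' + tileSizeMicrons +
        ', "trimToROI": false, "makeAnnotations": false, "removeParentAnnotation": false}')
```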

Tiles are a specific subset of detections, and cells are another. You might want cells handled separately from tiles, or tiles separately from cells, which is why the object filter lists them individually.
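You can see this in the scripting API as well: both tiles and cells come back from `getDetectionObjects()`, they are just different object classes. A quick illustrative snippet, assuming a 0.2.x script:

```groovy
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathTileObject

// All detections, then split by their concrete type
def detections = getDetectionObjects()
def tiles = detections.findAll { it instanceof PathTileObject }
def cells = detections.findAll { it instanceof PathCellObject }
println "Detections: ${detections.size()} (tiles: ${tiles.size()}, cells: ${cells.size()})"
```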

Thanks for the response. Sorry about the m5 terminology; I’m new to this framework and I thought others posting recently had used that to clarify recent versions (if not here, then in other places…).

The clarification that tiles are a subset of detections is helpful, even if it isn’t intuitive.

How can I script this duplication, and then select only the duplicates for further processing? Without scripting, the solution you propose seems to require a lot of manual effort for projects with many images, particularly when investigating a wide range of tile sizes.

For training a classifier, if I have different tile sizes on different duplicated images, how can I select tiles of a specific size to include in the classifier training process?

There is more information on the three types of objects here: https://qupath.readthedocs.io/en/latest/docs/concepts/objects.html
If you want to run something in a more automated fashion, you will likely need to do a bit of digging into scripting, as that is not a normal process. There was at least one post on creating projects or entries from the command line which might be useful, though I doubt it specifically covers the duplicate-entry function. If your computer can handle it and you want more direct comparisons, you may indeed want to keep all the tile sets in the same image.
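For picking out tiles of a particular size (whether they sit in duplicated images or all in one image), one hedged option is to filter on the tile ROI’s width converted to microns. Roughly like this; the 100 µm target and 1 µm tolerance are just example values:

```groovy
import qupath.lib.objects.PathTileObject

double targetMicrons = 100.0   // example target tile size
double tolerance = 1.0         // example tolerance in microns
def cal = getCurrentServer().getPixelCalibration()

def matching = getDetectionObjects().findAll {
    it instanceof PathTileObject &&
        Math.abs(it.getROI().getBoundsWidth() * cal.getPixelWidthMicrons() - targetMicrons) < tolerance
}
// Select them so they can be inspected or used for training
getCurrentHierarchy().getSelectionModel().setSelectedObjects(matching, null)
println "Selected ${matching.size()} tiles of ~${targetMicrons} um"
```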

I’m afraid any time I attempted something like that before, I did not duplicate images; I wiped the detections between tests to keep QuPath from struggling and dying. I only kept summary data within the parent annotation from each run (create tiles within the tissue annotation, add measurements, classify, check the tiles inside “test set” annotations to see whether they matched their classes, save measurements to the tissue annotation, rinse, repeat). If you want to create multiple sets of tiles, you may need to break the hierarchy (covered in another recent post… somewhere on here) so that you don’t wipe out the previous run’s tiles.
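As a rough sketch of that wipe-between-runs bookkeeping, where the “Tissue” class name and the accuracy value are placeholders for whatever you actually use and compute:

```groovy
// Copy a summary value to the parent tissue annotation, then clear the tiles
// before generating the next set.
def tissue = getAnnotationObjects().find { it.getPathClass() == getPathClass("Tissue") }  // placeholder class name
double accuracy = 0.87   // placeholder: whatever the test-set check produced
tissue.getMeasurementList().putMeasurement("Run 1 accuracy", accuracy)
fireHierarchyUpdate()

clearDetections()   // removes the tiles (and any other detections) before the next run
```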

You should be able to distinguish groups of tiles by incrementing a counter across runs, if you choose to keep them all. Create a measurement called TileRun (or similar) and give it a value of 1, 2, etc. on each run through. None of the new tiles will have this measurement yet, so that gives you a way to pick out the new set each time (alternatively, they are usually selected right after creation, I think?).
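Something along these lines, as a sketch (the exact calls may differ a little between versions):

```groovy
import qupath.lib.objects.PathTileObject

int runNumber = 1   // increment this on each pass

// New tiles are the ones without a TileRun measurement yet
def newTiles = getDetectionObjects().findAll {
    it instanceof PathTileObject && Double.isNaN(measurement(it, "TileRun"))
}

// ...calculate features / classify the new set here, then tag it for later
newTiles.each { it.getMeasurementList().putMeasurement("TileRun", runNumber) }
fireHierarchyUpdate()
println "Tagged ${newTiles.size()} tiles with TileRun = ${runNumber}"
```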