I’m playing around with using tiles (rather than cells or other detection objects) to segment tissue based on non-cellular features — for example, gray matter vs. white matter in brain.
After I generate tiles, I need to calculate features. When I do this (Analyze > Calculate features > … > Run) I am prompted to select either “Detections” or “Annotations”, and I’ve been selecting “Detections”. But when I then try to train an object classifier on these objects, the object filter treats “Tiles” separately from “Detections”, which is somewhat confusing given that tiles seem to be considered detections in other places. What’s the rationale behind this, and how should I be using it to be most robust?
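For what it’s worth, a quick sanity check in the script editor suggests tiles do come back from `getDetectionObjects()` — though I’m going from memory on the API here, so `isTile()`/`isAnnotation()` being `PathObject` methods is my assumption:

```groovy
// Quick check in the QuPath script editor (Groovy).
// Assumption: isTile() is a method on PathObject, alongside isDetection() etc.
def detections = getDetectionObjects()
def tiles = detections.findAll { it.isTile() }
println "Detections: ${detections.size()}, of which tiles: ${tiles.size()}"

// Annotations are kept in a separate list entirely:
println "Annotations: ${getAnnotationObjects().size()}"
```

On my project this reports all my tiles inside the detections list, which is what makes the separate “Tiles” filter in the classifier dialog feel inconsistent to me.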
On a related note, I’d love to create tiles of varying sizes and compare classifier performance. Is there a canonical way to keep tiles of multiple sizes within the same project, and to calculate features/train classifiers on each set for comparison?
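What I had in mind was scripting the tiling step at several sizes, roughly like the sketch below — the `TilerPlugin` class name and its JSON parameters (`tileSizeMicrons`, `trimToROI`, `makeAnnotations`, `removeParentAnnotation`) are what I’ve pieced together from other forum posts, so please correct me if that’s not the idiomatic way:

```groovy
// Sketch: generate tile grids at several sizes from the selected annotation(s).
// Class name and parameter names are my assumptions from other posts.
def tileSizes = [50.0, 100.0, 200.0]  // microns

for (size in tileSizes) {
    selectAnnotations()
    runPlugin('qupath.lib.algorithms.TilerPlugin',
        """{"tileSizeMicrons": ${size},
            "trimToROI": true,
            "makeAnnotations": false,
            "removeParentAnnotation": false}""")
}
```

The part I can’t see how to do cleanly is keeping the three grids distinguishable afterwards (separate images? separate projects? tagging each batch by name or class?) so that feature calculation and classifier training can be run per size. Is there a recommended pattern for that?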