Macro shapes classification

Hi, another basic question, and thank you for the replies; you guys are very supportive of us newbies.
Is there a way to train the classifier with macro pictures? For example, a blood vessel (9 mm²) as a single entity/shape is immediately recognized by the human eye as different from a stromal cell (a few microns), but if analyzed based on nuclei they may come up as the same.
The closest I found was using superpixels, but that did not quite work for this.

To answer your first question: no, QuPath does not do the type of semantic image classification you are describing here. But combinations of pixel classifiers, superpixels, and/or object classifiers can likely get at what you need.

Could you describe a bit more what you are trying to do? We might be able to help find a solution, but need a better understanding of your problem.

Thank you.
Big structures in my tissue, like vessels, adipose tissue, ducts, and granulomas, are very easy to identify with the naked eye for “architectural” reasons (size, and the combination of layers: for vessels, for example, a thin endothelium with a thick eosinophilic wall and a big hole in the middle). I understand that classifications in QuPath are based on nuclei detection, and because nuclei in a stromal cell and in a vessel can have similar measurements, QuPath does NOT identify them as separate types of cells/structures.
I guess I’m trying to identify architectural features, not only cytological features.

Without images it’s hard to give specific advice, but for finding large structures, I usually start with superpixels. The general process is to:

  1. Make superpixels covering your entire sample. You’ll have to play with the smoothing, spacing, and regularization parameters to find ones that work well. The important thing here is for the objects to be as large as possible while still preserving the edges of your structures. It’s completely fine for a single structure to be broken into multiple little objects, but if a superpixel spans two structures, no further steps will separate them. (Rough script sketches for each of these steps follow the list.)

  2. Calculate features for the detection objects. I start with all the features (intensity and Haralick measurements in every channel, shape measurements, cluster measurements, then smoothed features on top, sometimes with different radii) and then pare down once I have a better sense of what matters. Use measurement maps to figure this out.

  3. Train an object classifier to recognize the structures you care about.

  4. Convert the superpixels into annotations with “Tile classifications to annotations”.

  5. Sometimes I’ll convert these annotations to detections (through a script, sketched below), and then train and run a second object classifier to clean up the results even more. This is especially useful for separating two similar classes that are only differentiable based on larger context, like the presence or absence of muscle around vessels.

  6. Finally you are left with annotation objects segmenting the different structures. You can run cell detection inside each of them and do whatever analysis you need on the cells, knowing which structure they are part of.
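
To make those steps more concrete, here are some rough Groovy sketches of how each one can be scripted in QuPath. Treat them as sketches only: the classifier names are placeholders, and the exact plugin parameters (and some class names) can differ between QuPath versions, so run the commands interactively and copy the lines from the Workflow tab to get the exact syntax for your version. For step 1, generating SLIC superpixels inside the selected annotation(s) looks roughly like this:

```groovy
// Step 1 (sketch): generate SLIC superpixels inside the selected annotation(s).
// sigma = smoothing, spacing = superpixel size, regularization = shape regularity;
// the values below are illustrative starting points, not recommendations.
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
        '{"sigmaMicrons": 5.0, "spacingMicrons": 50.0, "maxIterations": 10, ' +
        '"regularization": 0.25, "mergeSimilar": false}')
```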
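
For step 2, the intensity/Haralick, shape, and smoothed features can be added to the superpixel tiles in the same way (again, the parameter lists are abbreviated examples; the dialogs expose many more options):

```groovy
// Step 2 (sketch): add features to the superpixel detections.
selectDetections()

// Intensity and Haralick texture features (parameters are illustrative)
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
        '{"pixelSizeMicrons": 2.0, "region": "ROI", "tileSizeMicrons": 25.0, ' +
        '"colorOD": true, "colorStain1": true, "colorStain2": true, ' +
        '"doMean": true, "doStdDev": true, "doMinMax": true, "doMedian": true, ' +
        '"doHaralick": true, "haralickDistance": 1, "haralickBins": 32}')

// Shape features
runPlugin('qupath.lib.plugins.objects.ShapeFeaturesPlugin',
        '{"area": true, "perimeter": true, "circularity": true, "useMicrons": true}')

// Smoothed features, so each tile also "sees" a bit of its neighborhood
runPlugin('qupath.lib.plugins.objects.SmoothFeaturesPlugin',
        '{"fwhmMicrons": 50.0, "smoothWithinClasses": false}')
```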
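
Steps 3 and 4 are mostly interactive (training and saving the object classifier), but once a classifier is saved it can be applied and the classified tiles merged into annotations from a script; "Vessel classifier" and "Vessel" below are placeholder names:

```groovy
// Steps 3-4 (sketch): apply a saved object classifier to the superpixels,
// then merge the classified tiles into annotation objects.
runObjectClassifier("Vessel classifier")

runPlugin('qupath.lib.plugins.objects.TileClassificationsToAnnotationsPlugin',
        '{"pathClass": "Vessel", "deleteTiles": true, "clearBoundary": false, ' +
        '"splitAnnotations": false}')
```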
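
For step 5, the script that turns annotations back into detections (so a second object classifier can be trained and run on them) can be as simple as:

```groovy
// Step 5 (sketch): convert classified structure annotations into detection
// objects so a second object classifier can be trained/run on them.
// Only classified annotations are converted, so an unclassified parent
// annotation (e.g. the whole tissue) is left alone.
import qupath.lib.objects.PathObjects

def annotations = getAnnotationObjects().findAll { it.getPathClass() != null }
def detections = annotations.collect {
    PathObjects.createDetectionObject(it.getROI(), it.getPathClass())
}
removeObjects(annotations, true)
addObjects(detections)
```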
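
And for step 6, cell detection can be run inside the final annotations so that every cell knows which structure it belongs to; the parameters below are just typical dialog values for a brightfield image and will need tuning:

```groovy
// Step 6 (sketch): run watershed cell detection inside the structure annotations.
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
        '{"detectionImageBrightfield": "Hematoxylin OD", ' +
        '"requestedPixelSizeMicrons": 0.5, "backgroundRadiusMicrons": 8.0, ' +
        '"medianRadiusMicrons": 0.0, "sigmaMicrons": 1.5, ' +
        '"minAreaMicrons": 10.0, "maxAreaMicrons": 400.0, "threshold": 0.1, ' +
        '"maxBackground": 2.0, "watershedPostProcess": true, ' +
        '"cellExpansionMicrons": 5.0, "includeNuclei": true, ' +
        '"smoothBoundaries": true, "makeMeasurements": true}')
```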

These steps are described in more detail here, and also here (for an older version of QuPath).

That was all for large structures. For small structures, you can follow a similar process, except using a pixel classifier instead of superpixels. Train the classifier, create detection objects, then run a second object classifier to clean up the resulting objects.
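
As a sketch of that route (assuming QuPath 0.2+ and a pixel classifier you have already trained and saved, here called "SmallStructures" as a placeholder), the object creation and cleanup can also be scripted:

```groovy
// Sketch: create detection objects from a saved pixel classifier.
// "SmallStructures" is a placeholder classifier name; the two numbers are the
// minimum object area and minimum hole area in calibrated units.
selectAnnotations()   // optional: restrict to existing annotations
createDetectionsFromPixelClassifier("SmallStructures", 10.0, 0.0)

// ...then train and apply a second object classifier to clean up the objects:
// runObjectClassifier("SmallStructures cleanup")
```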

Don’t be afraid to mix and match some of these methods: a low-resolution pixel classifier can annotate a tumor and then a superpixel classifier can find vessels inside of it, or vice versa.
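
For example (classifier names again placeholders), the low-resolution pixel classifier output can simply become the parent annotation that the superpixel steps above are run inside:

```groovy
// Sketch: combine the approaches. Annotate tumor with a saved low-resolution
// pixel classifier, then build superpixels only inside those annotations.
createAnnotationsFromPixelClassifier("Tumor low-res", 10000.0, 1000.0)
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin',
        '{"sigmaMicrons": 5.0, "spacingMicrons": 50.0, "maxIterations": 10, ' +
        '"regularization": 0.25, "mergeSimilar": false}')
```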

Thank you very much, I’ll look into it / learn / try it.
