Using cell detection and SLIC superpixels together


I am interested in analyzing a complex tissue and have found cell detection followed by the object classifier to be quite successful. However, in my tissue there are regions where the nuclei are sparse, and increasing the cell expansion only causes more misidentification downstream. Also, I want to eventually quantify all of the space on the tissue, not just the areas near nuclei. It seems to me that running the cell detection and then running SLIC (or a similar process) to “fill in” the remaining space with objects could be useful for the detection classifier. Can this be done?

I have attached an image of my tissue and an annotated version showing the four areas I want to be able to eventually classify.

Any help is greatly appreciated!

If you are using 0.1.2, I would look into some of the posts on converting SLICs into annotations, then running the cell detection within those classified annotations. Each annotation area would give you the tissue area measurement, and you could run cell detection within each.

If you are using M9, I would replace the SLICs with the pixel classifier.

I have the most recent version; are there any resources on how to use the pixel classifier and cell detection in the same workflow?

The pixel classifier alone does not work as well as cell detection alone in determining my regions. I suspect it has something to do with the nearby cell count parameter included in the smoothed features when I train the object classifier. Is there a similar mechanism in the pixel classifier?


Without the full images it would be difficult to give specific advice for the pixel classifier, but it does seem to struggle with large-scale context at times, since 8x is the largest scale it can use without coarsening the initial pixel size (so you lose texture information). With your sample you need both fine detail and large-area context, which is… difficult. Have you tried using a larger pixel size along with the 2x, 4x, and 8x scales in the pixel classifier?
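To get a feel for the trade-off between texture and context, here is a rough standalone sketch (NumPy/SciPy, not QuPath's actual implementation): smoothing a channel at increasing scales mimics what coarser 2x/4x/8x feature scales give a pixel classifier — more spatial context, less fine texture. The image here is random stand-in data.

```python
# Illustration only: mimic multi-scale context features by smoothing an
# image at several scales, roughly analogous to 2x/4x/8x scales in a
# pixel classifier. This is NOT QuPath's code.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))  # stand-in for one deconvolved stain channel

# One feature plane per scale; larger sigma = more spatial context,
# less fine texture surviving in the plane.
scales = [2, 4, 8]
features = np.stack([gaussian_filter(image, sigma=s) for s in scales], axis=-1)

print(features.shape)  # (128, 128, 3): one smoothed plane per scale
```

Note how the coarsest plane has far less variation left in it — that is the texture information being traded away for context.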

If your cell segmentation and classification are working, you might also be able to use one of Pete’s scripts, which creates annotations based on the class of the cells nearest to any given pixel.

It is highly likely this script will not work as is in the newer versions, but you might be able to adjust it using Pete’s list of changes to scripting here:

Note the warnings, and you would probably need to run the cell detection a second time after creating the areas, if I recall correctly.

Another option is using SLICs, where you can smooth the features. Other people I have talked to have also had better luck creating accurate regions with that kind of workflow.

Thanks for the info. When I am using the pixel classifier, would you say it is helpful to have more or fewer classifications, i.e. cell-level classifications or tissue-level classifications?

I am currently using it with varying levels of success. Would you recommend pairing the pixel classifier with a subsequent object classifier?

Here is a google drive link to the full res image.


Not sure what you mean; the types of classifiers you use depend on your desired data output. I was thinking area by tissue type (pixel or SLIC classifiers) followed by cell detection… if you need cell detection.

If you want to subdivide into more classes, I generally like to do that just so I can more easily see where the classifier is getting confused (while training). Afterwards I can always reassign all of the sub-class annotations to the full class. Say I wanted only Muscle as a class, but I had smooth and skeletal: I would start with both a smooth and a skeletal class, then reassign the training labels to “Muscle” once I got everything working.

Alternatively, if you only need area measurements, and the cell classifier works well, use the outline of the classified cells to generate your areas (script mentioned in the last post). You wouldn’t use the two methods together though.

I had a little bit of time and took a quick shot at it. I can see some difficulties splitting some of the region types… though I am not 100% certain my training areas were accurate as the tissue merges interestingly in some places.
- Yellow: muscle
- Green: tissue
- Purple: immune cells? I ignored loose clusters and kept dense ones.
- Orange: bone “other”
- Red: something else, “tumor” (not actually tumor)

Training areas.

Classifier with Advanced options “reweight samples” checked.

Ending up with

Biggest issue was the crossover between muscle and whatever the red in the middle of the bone is.

Hmm, I kinda like the SLICs better, though. Thanks to @smcardle for bringing this up yesterday.

Still not perfect, but I might have overly smoothed the measurements and I wasn’t quite clear on some of the tissue types.

WOW, OK, so clearly QuPath can do exactly what I want; I just have to push the buttons and turn the knobs better.

I was thinking I would use the pixel classification to identify blue for nuclei, pink 1 for muscle, pink 2 for bone, etc., and then run an object classification to identify the larger tissue sections. But it is clear you do not have to do this.

I do like the SLIC better; it has less noise, and probably with some refinement it will be even more specific. Did you calculate intensity features for each tile, and smooth those features? Do you find utility in the Haralick features? Also, when picking the algorithm for classification (Random Forest, ANN, etc.), do you have any advice? Do you just try all of them and see which one works best?

Basically, the end goal would be to get the big green region in the SLIC image, measure the area (which should be trivial), and count cells within it. Then measure the area of the pink and red to control for the amount of tissue on the slide. BTW, this is a mouse knee with inflammatory arthritis (analogous to human rheumatoid arthritis). The green is the infiltrating lymphocytes and proliferating synoviocytes/fibroblasts (the size and number of cells here is a measure of arthritic severity), the pink is the growth plates, meniscus, and articular cartilage of the femur and tibia, the red is mature bone, and the purple is bone marrow.

Thanks so much!


Thanks for providing the image and examples of what you were looking for! That always makes it easiest to help quickly.

Yes, I calculated intensity features (after refining the H&E vectors in PreProcessing), added shape features to get each SLIC’s size (though that might not matter), and then smoothed. SLICs were 50 µm in size, and for the smoothing I think I went with 100 µm; that might have been a bit too large based on some of what I saw. Haralick features can be important, but one thing to look for is which ones define your areas. Set the opacity to 50% and run through the features in the Measurement Maps tool to see where color changes indicate that one feature or another is useful for separating your classes of tissue, if you want to dig into it.

I normally find that the Random Forest works better if I can refine which features are actually important. If I am throwing in everything and the kitchen sink, a giant mass of over 100 features, I often find the ANN doing better. But in the end, it comes down to testing. In case it helps, there is more info on features here:

Though it is largely based on 0.1.2/3.
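To make the “test both and prune features” advice concrete, here is a hedged scikit-learn sketch — synthetic data stands in for per-object measurements, and nothing here is QuPath-specific:

```python
# Train a Random Forest and a small neural net on the same features,
# compare accuracy, and use the forest's importances to spot which
# measurements matter (candidates for pruning before retraining).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 20 features, only 5 of them informative
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("RF accuracy :", rf.score(X_te, y_te))
print("ANN accuracy:", ann.score(X_te, y_te))

# Features with near-zero importance are candidates to drop.
top = rf.feature_importances_.argsort()[::-1][:5]
print("Most important feature indices:", top)
```

With few informative features the forest usually wins; with a huge undifferentiated feature soup the comparison can flip — which matches the experience described above.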