Pixel Classification to Object Detection

Hey QuPath Community,

I am training a pixel classifier in QuPath to recognize vascularity in histology slide images. I have reached out for help on this before; after working on it for about a week, I am realizing it is a very difficult task.
Previous classifier results:

Updated Classifier Result: (not the same image location)

My current plan is to use the Create objects tool to generate annotations (or detections), then classify those so they can be counted for data analysis.

The problems I am currently having are:
When creating objects (or detections), the edges of the vascularity are not properly defined. It will select the vessels but also include all the loose cells nearby, which merges what should be many detections into one. This is a problem when I want to measure how many veins are in a certain area, because only the one or two very large objects get detected. (It is also troublesome because these objects range in area from 400 µm² to 150,000 µm², so simply limiting the size will not help with the current situation.)
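In QuPath itself the Create objects dialog has a minimum-area setting for this, but the underlying idea is just connected-component labeling plus an area cutoff converted from µm² to pixels. A rough sketch of that logic (the mask, pixel size, and threshold below are all made-up values, not taken from the thread):

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary mask from a pixel classifier (True = vasculature).
mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:3] = True          # small blob: 4 px
mask[5:9, 4:9] = True          # larger blob: 20 px

pixel_size_um = 0.5            # assumed pixel size in microns
min_area_um2 = 2.0             # assumed lower bound for a real vessel

# Label connected components, measure each one's area, and keep
# only the components above the physical-area cutoff.
labels, n = ndimage.label(mask)
areas_px = ndimage.sum(mask, labels, index=range(1, n + 1))
areas_um2 = areas_px * pixel_size_um ** 2
keep = [i + 1 for i, a in enumerate(areas_um2) if a >= min_area_um2]
filtered = np.isin(labels, keep)
```

With a 0.5 µm pixel, the 4-pixel blob is only 1 µm² and gets dropped, while the 20-pixel blob (5 µm²) survives; the same conversion explains why one size limit struggles to span a 400-150,000 µm² range.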
This is the result:

Any and all help is greatly appreciated! Thanks!

If you can provide a raw image or access to a raw image - along with some expected results within that image, that might help others interested in similar problems.

Your pixel classifier may be too high resolution if you have too many gaps or spikes.
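Lowering the classifier resolution smooths the prediction in much the same way as morphological post-processing of the output mask. As a toy illustration of what "gaps" and "spikes" mean and how closing/opening removes them (the mask below is invented, and this is a generic SciPy sketch rather than anything QuPath does internally):

```python
import numpy as np
from scipy import ndimage

# Hypothetical noisy classifier output: one solid region containing a
# single-pixel hole (a "gap"), plus an isolated stray pixel (a "spike").
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
mask[4, 4] = False     # gap inside the region
mask[0, 8] = True      # spike far from the region

closed = ndimage.binary_closing(mask)     # fills small gaps
cleaned = ndimage.binary_opening(closed)  # removes small spikes
```

After the two operations the hole is filled and the stray pixel is gone, at the cost of slightly rounding the region's corners.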

Also, have you tried superpixels as recommended by @smcardle ?
And, of course there is always the Deep Learning route if nothing else is working - which may well be the case if you do not have a single stain for your area of interest. As long as context is important (stain in these areas is important, but stain in these other areas is not!), DL will likely be the best and possibly only option. Or re-work your experiment with a more accurate stain using immunohistochemistry.

DL might require a collaboration or a lot more coding. Fast.AI is a good resource if you want to dig into that. Alternatively, if you/your institution have access to Visiopharm, that might be a deep learning option, of a sort.

Hey @Research_Associate thanks for the quick reply.

I have looked into the superpixel route, but what I gathered is that I wouldn’t be able to detect very large objects. The range of areas for these objects greatly exceeds any tile size that would be usable for detecting them in the tissue. Please correct me if I am wrong!
Here is the raw image:

and here is another area of interest without too many additions to the raw image; the annotations are training selections, not ones generated by the pixel classifier.

That is still a jpeg and a snapshot, not a raw image - so nothing tested on it will be incredibly accurate but it is a start. See Exporting images — QuPath 0.2.3 documentation to export full resolution subsections, or, if possible, post the whole image.

Superpixels are just big pixels, and size of the superpixel has no impact on the size of the object you want to create from them. The same with the pixel classifier you are using - all of the objects you are creating are much bigger than a single pixel.

Another note, your training regions are almost certainly far too large. You are not training QuPath to find objects, just pixels.

This was the result using just 5 lines, indicated in red, for training. I was able to pick up the reddish purple areas quite easily - but figuring out which of those are desirable and which are not is more of a feature issue, or perhaps a second classifier only within those objects - not something that the first pixel classifier will be able to do.

Circularity or other measurements could be used to classify them. Alternatively, if you need to split objects further, you could train another classifier within the first to find edges that can be used to split closely packed objects.
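Circularity here is the standard shape measurement 4πA/P², which is about 1.0 for a circle and drops toward 0 for elongated or ragged shapes. A minimal sketch of using it as a classification cutoff (the cutoff value and the example area/perimeter numbers are invented; in practice you would read these from QuPath's shape measurements):

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P^2: about 1.0 for a circle, lower for elongated
    or ragged shapes such as longitudinally sectioned vessels."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius r (area pi*r^2, perimeter 2*pi*r) scores ~1.0.
r = 10.0
round_score = circularity(math.pi * r ** 2, 2 * math.pi * r)

# Hypothetical cutoff -- tune it against real measurements.
is_round_vessel = circularity(1200.0, 200.0) > 0.3
```

Cross-sectioned vessels tend to score high on this measure, while merged clumps and tangential cuts score low, which is what makes it usable as a classifier feature.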

Overall, I suspect you are trying to do too much in one step.


Thank you for all your insights! The image I am using is 600+ MB and in an .svs format so it won’t let me upload it to this website but could get it to you another way if you would like. Here is a .tif of the whole slide if this is of any interest. SC.tif (6.9 MB)
and of the smaller section.

The way you separated out the sample from the whole image is fantastic. What were the steps you took to create those selections for the larger objects?
Do you think I should give super pixels another shot? @Research_Associate


Whole slide images can be hosted through GoogleDrive, FirefoxSend, and other free sharing services - all you need is the link in your post.
The annotation lines I drew above were representative of the two primary kinds of normal tissue I saw in that image; I made sure to draw them across nuclei and other types of features. At some point I made sure each line started or ended near an “edge” to give the pixel classifier some idea of what a border should look like. Borders are going to be the hardest part, because two classes of pixels sit right next to each other there.
Much of this will look different since it was done on a JPEG without pixel sizes:
The two channels were Hematoxylin and DAB since I did not rename them - I did set the color vectors though.
In advanced options I did make sure to reweight samples as I did not spend any time balancing out my classes for equal representation. The three classes were essentially other tissue, vasculature, and Ignore* for whitespace. I probably should have made Tissue be Tissue* so that tissue annotations would not be created, but I thought about it too late.

Superpixels give you far more control over measurements, though perhaps less over exact borders. You can make use of all of the Calculate Features menu options to add additional information to the superpixels, and even include information about the surroundings through use of tile based measurements.
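The point about measurements is that each superpixel becomes one row of features (stain intensities, texture, and so on), and a classifier then groups superpixels into objects of any final size. The generation and feature menus live in QuPath itself, but the aggregation idea can be sketched with a toy label image (both arrays below are invented):

```python
import numpy as np

# Hypothetical superpixel label image (each pixel assigned to one of
# n superpixels) and one feature channel, e.g. a stain estimate.
labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 2, 2]])
intensity = np.array([[0.1, 0.2, 0.9],
                      [0.3, 0.8, 1.0],
                      [0.5, 0.5, 0.5]])

n = labels.max() + 1
# Mean intensity per superpixel: one measurement per "big pixel",
# usable as a classifier feature regardless of final object size.
sums = np.bincount(labels.ravel(), weights=intensity.ravel(), minlength=n)
counts = np.bincount(labels.ravel(), minlength=n)
means = sums / counts
```

Because the features belong to the superpixels and not to whole objects, the same table works whether the final merged object is 400 µm² or 150,000 µm².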

As for whether or not it will work for your case, no idea.


Thank you very much with all the suggestions and explanation!! Here is the link to the full image, it is 605MB just a warning on the size.

Would expect to get something like this from a pixel classifier first run - though there is a good chance a classifier like this is overfitted to the current image.

It also might not be too bad manually drawing them with the wand tool when the staining pattern is clear.


I’ve found that detecting veins works well with ImageJ’s Plugins > Segmentation > Trainable Weka Segmentation.

Your selections in Trainable Weka Segmentation are made much like the wand manual selection above, though you don’t see the selection until after you train.

Here’s a YouTube video on how to do it.


Let me know if you need any help at ron_despain@hotmail.com

Looking at the name above, the video was probably this one: Trainable Weka Segmentation 1080p - YouTube
I do not think the MP4 will upload.

Weka may have trouble with full sized images, though it does have many more pixel measurements than QuPath’s pixel classifier.

That’s the site.

Please reply from your email, as I can’t respond except to sign in to ImageSC

@Research_Associate Which features did you choose in that “Select features” tab? I have trained my classifier pretty well and am now trying to get the results you did.

Edit is highlighted in blue next to Features - that is where the majority of your control over the classifier is.
The features I selected are the ones shown, along with the scales and color vectors as listed above.