Recognizing vessel section shapes

Hi, I am a medical student at the University of Parma (Italy). I’m writing a thesis on the vascular patterns of glioblastomas and I’m using QuPath. In glioblastomas, two patterns have been identified: “classic” (microvessels) and “bizarre” (which includes three subpatterns: glomeruloid, garland-like and clustered). My question is: would it be possible to “train” the software to automatically recognize the vessel sections as “annotations” and automatically distinguish at least the two patterns (classic and bizarre)? I think that, as far as the microvascular pattern is concerned, it is sufficient to measure the minor diameter of the shape, because the section may not be transversal to the vessel and may therefore be stretched. The problem will instead be the bizarre pattern and its three subpatterns. I’m thinking of some correlation between perimeter and area, but I don’t know how to go on. Thanks

Manuel

I’m not sure I understand enough of the distinctions between classic and bizarre, so all I can say is… definitely maybe.

The machine learning classifiers work exclusively on detection objects, not annotations. That can be confusing, since various programs refer to different object types as annotations, and pathologists use the word to describe most markings. In the case of QuPath, annotations are a specific object type: you can certainly assign classes to them (for example by script), but you cannot train QuPath to classify them.

  1. Could you go into more detail as to how you are detecting these objects right now? Is that the positive pixel detection?
  2. There are a limited number of physical size/orientation/similar measurements included in QuPath, but if you can define others that you are interested in, that might be worth including for the future. Moment of inertia was one I thought might be useful.
  3. Regardless of whether you start with annotations or detections, you can inter-convert between the two with a script.
  4. The next Milestone release will have a pixel classifier (that can be saved and loaded!) that might help with creating objects as annotations or detections.
  5. I have a script somewhere that adds the feret diameters (min/max) to each detection, and can probably dig that up. Though whether and how it will work would depend on what version of QuPath you are or will be using.

Hi, thank you for the answer. I’ll try to explain my problem. I have 100 images of different cases of glioblastoma multiforme (GBM).
Since it is “multiform”, its vascular patterns are also variable, both between different tumors and within the same tumor. Two main patterns have therefore been identified:

  1. Classic (or microvascular or capillary-like)
  2. Bizarre (which includes the subpatterns: glomeruloid, cluster and garlands).

I divided the sample into 3 areas (based on the different CD34 signal, a cytoplasmic vascular marker, but which can actually be expressed also by tumor cells). In each area, I placed 10 equal squares (FIG. A) and for each square I calculated:

Based on the annotations that are formed from the positive pixels, I would like to find a method to recognize the shape, in a way that allows it to be classified into the various vascular patterns.

CLASSIC PATTERN (FIG. B): I think it is enough to calculate the smaller diameter (not the larger one, because some microvessels are sectioned obliquely rather than transversely, so the larger diameter could be distorted). Problem: when I select a single vessel section, the entire annotation is highlighted. I could separate it manually, but I would like a way to recognize them automatically.

BIZARRE PATTERN:
- Garlands (FIG. C): I think we need a quantitative method based on the ratio between positive and negative pixels (theoretically positive >>> negative)

- Glomeruloid (FIG. D): very large (minor?) diameter

- Clusters: it may be difficult to detect them; in my example in QuPath they are not present. They could have the same diameter and shape as the classic pattern, but a different distribution.

Thank you all


Thanks for the detailed reply, and I’ll take a look at this a bit more later, but I am not sure which scripts would work for you without knowing your version of QuPath.

I use QuPath 0.1.2. I have also downloaded the new version, but I haven’t learned all of its new functions yet.

I didn’t have a lot of time to take a look at this yesterday, but here’s a start.

Convert the positive pixel areas into an annotation:

def detections = getDetectionObjects()

// Change this line to subset = detections if you want to convert ALL current detections to annotations. Otherwise adjust as desired.
def subset = detections.findAll {it.getROI() && it.getPathClass() == getPathClass("Positive")}

// Create corresponding annotations (name this however you like)
def classification = getPathClass('Positive')
def annotations = subset.collect {
    new qupath.lib.objects.PathAnnotationObject(it.getROI(), classification)
}

// Remove the original detections & replace with annotations
removeObjects(subset, true)
addObjects(annotations)

Script to split the single annotation that was just created into contiguous bits.

If desired, convert the annotations back into detections so that further measurements can be generated.

def annotations = getAnnotationObjects()

// Change this line to subset = annotations if you want to convert ALL current annotations to detections. Otherwise adjust as desired.
def subset = annotations.findAll {it.getROI() && it.getPathClass() == getPathClass("Positive")}

// Create corresponding detections (name this however you like)
def classification = getPathClass('Positive')
def detections = subset.collect {
    new qupath.lib.objects.PathDetectionObject(it.getROI(), classification)
}

// Remove the original annotations & replace with detections
removeObjects(subset, true)
addObjects(detections)

Other measurements include area and circularity from Calculate Features -> Add Shape Features or Add Intensity Features, or even other options such as the Feret diameters. The guide contains some other suggestions for features: QuPath Intro: Choose your own analysis (adventure)
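As a rough illustration of the perimeter-to-area idea raised earlier in the thread, circularity can also be computed directly from each ROI when the built-in shape features aren’t convenient. This is only a sketch, not from the original discussion: it assumes the ROIs are area ROIs exposing getArea()/getPerimeter() as in QuPath 0.1.2, and the 0.4 cutoff and the "Bizarre candidate" class name are placeholder assumptions to tune on real data.

```groovy
// Hedged sketch: compute circularity = 4*pi*Area / Perimeter^2 per detection.
// ~1.0 for a circle, much lower for elongated garland-like profiles.
// Threshold (0.4) and class name are placeholders, not validated values.
getDetectionObjects().each {
    def roi = it.getROI()
    double area = roi.getArea()
    double perimeter = roi.getPerimeter()
    double circ = 4 * Math.PI * area / (perimeter * perimeter)
    it.getMeasurementList().putMeasurement('Circularity (computed)', circ)
    if (circ < 0.4)
        it.setPathClass(getPathClass('Bizarre candidate'))
    it.getMeasurementList().closeList()
}
fireHierarchyUpdate()
```

Run inside QuPath’s script editor; it is not a standalone program.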

Modification of the Feret angle script:

import qupath.imagej.objects.*

getDetectionObjects().each {
    def ml = it.getMeasurementList()
    def roi = it.getROI()
    def roiIJ = ROIConverterIJ.convertToIJRoi(roi, 0, 0, 1)
    def maxDiam = roiIJ.getFeretValues()[0]
    def minDiam = roiIJ.getFeretValues()[2]
    ml.putMeasurement('Feret max diameter', maxDiam)
    ml.putMeasurement('Feret min diameter', minDiam)
    ml.putMeasurement('Average diameter', (minDiam+maxDiam)/2)
    ml.closeList()
}
fireHierarchyUpdate()
print "done"
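Once the Feret measurements exist, a simple threshold-based split between the two patterns could look like the sketch below. This is an assumption-laden illustration, not a validated method: the 10-unit cutoff (in whatever units the ROI uses) and the class names "Classic" and "Bizarre" are placeholders.

```groovy
// Hedged sketch: classify each detection by its minimum Feret diameter.
// Cutoff value and class names are placeholder assumptions only.
getDetectionObjects().each {
    def minDiam = it.getMeasurementList().getMeasurementValue('Feret min diameter')
    if (!Double.isNaN(minDiam)) {
        def name = minDiam < 10 ? 'Classic' : 'Bizarre'
        it.setPathClass(getPathClass(name))
    }
}
fireHierarchyUpdate()
```

Run inside QuPath’s script editor after the Feret script above has added the measurements.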

I’m less certain about the overall classification scheme. If you are able/allowed to host the full resolutions I could probably take a look, but hopefully this gives you enough to get started.

I ended up getting something like this with a quick test.

Thank you very much! You have been very kind. I did a lot of testing with these scripts and I think I have enough data to be able to continue with my project. Next week I will attach my results below. Have a nice weekend!


Good Sunday to all! This week I used the scripts you provided and I can confirm that they work. My difficulty is in determining which objects are separate vessels and which are parts of the same vessel. Unfortunately the antibodies do not bind perfectly around the entire circumference of the wall, especially in the larger vessels. This leads to an underestimation of vessel size and an overestimation of vessel number (as the vessels are segmented). Next week we will test other markers. Good day and thank you so much! OT, out of curiosity: what are the specs of the computers on which you use QuPath? I have a PC with 6 GB RAM and an i3 processor; I sometimes have problems, especially with positive pixel detection, and with the new “milestones” it’s nearly impossible to run.


32-200 GB of RAM, an i7, at least 8 threads. Though I tend to build my own machines, and all of that is usually not necessary. If you upgrade anything, it would probably be the RAM first, or try increasing the maximum amount of memory available through Help menu -> Setup Options.

Hello everyone, I have a problem with setting classes.
The scripts (which you gave me) work on the positive classes.
In order, I do this:
1) I draw “n” squares


2) I click on “Positive pixel detection”

3) Script that converts positive pixels into one big annotation:

def detections = getDetectionObjects()

// Change this line to subset = detections if you want to convert ALL current detections to annotations. Otherwise adjust as desired.
def subset = detections.findAll {it.getROI() && it.getPathClass() == getPathClass("Positive")}

// Create corresponding annotations (name this however you like)
def classification = getPathClass('Positive')
def annotations = subset.collect {
    new qupath.lib.objects.PathAnnotationObject(it.getROI(), classification)
}

// Remove the original detections & replace with annotations
removeObjects(subset, true)
addObjects(annotations)

4) Script that splits the aforementioned annotation into small, contiguous annotations

import static qupath.lib.roi.PathROIToolsAwt.splitAreaToPolygons
import qupath.lib.roi.AreaROI
import qupath.lib.objects.PathAnnotationObject

// Get all the annotations
def annotations = getAnnotationObjects()

// Prepare to add/remove annotations in batch
def toAdd = []
def toRemove = []

// Loop through the annotations, preparing to make changes
for (annotation in annotations) {
    def roi = annotation.getROI()
    // If we have an area, prepare to remove it -
    // and add the separated polygons
    if (roi instanceof AreaROI) {
        toRemove << annotation
        for (p in splitAreaToPolygons(roi)[1]) {
            toAdd << new PathAnnotationObject(p, annotation.getPathClass())
        }
    }
}

// Perform the changes
removeObjects(toRemove, true)
addObjects(toAdd)


5) script that converts annotations to detections:

def annotations = getAnnotationObjects()

// Change this line to subset = annotations if you want to convert ALL current annotations to detections. Otherwise adjust as desired.
def subset = annotations.findAll {it.getROI() && it.getPathClass() == getPathClass("Positive")}

// Create corresponding detections (name this however you like)
def classification = getPathClass('Positive')
def detections = subset.collect {
    new qupath.lib.objects.PathDetectionObject(it.getROI(), classification)
}

// Remove the original annotations & replace with detections
removeObjects(subset, true)
addObjects(detections)

The problem is that all detections end up with the same class (“Positive”), while I would like them to differ for each square that contains them. I could do this manually, but it would take a very long time. I need a script that automatically sets the classes per square, preferably using letters (A for the first square, B for the second, etc.).
Can you help me?
Thanks

Kind of a side note, but you can use simple tissue detection, create tiles from that annotation, and then discard any tiles that are not full size if you don’t want edge tiles. That allows you to use a random number generator to select your tiles.
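The random selection mentioned above could be sketched roughly as follows. This is an assumption-heavy illustration: it assumes the tiles exist as annotation objects, and the count of 10 and the fixed seed are arbitrary placeholders.

```groovy
// Hedged sketch: randomly keep 10 tile annotations and delete the rest.
// The seed is fixed only so the selection is reproducible between runs.
def tiles = getAnnotationObjects().findAll { it.getROI() != null }
Collections.shuffle(tiles, new Random(42))
def toRemove = tiles.drop(10)   // everything after the first 10 shuffled tiles
removeObjects(toRemove, true)
```

Run inside QuPath’s script editor after tiling; adjust the filter if you only want to shuffle tiles of a particular class.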

Anyway, for classes, I would recommend building a string that has the base class as a name, and then, if possible, a number after them. You can certainly adapt it to letters (ASCII, probably) but numbers are quicker for me so I’ll start with that.


baseClass = "Tile "
i = 1
getAnnotationObjects().each{
    currentName = baseClass + i
    it.setPathClass(getPathClass(currentName))
    it.getChildObjects().each{c->
        if (c.getPathClass() == getPathClass("Pixel count positive")){
            c.setPathClass(getPathClass("Positive " + currentName))
        }
    }
    i++
}

Note that this was written for M6, and the names of your tile or positive pixel count classes may change between versions. Currently the class for the positive pixel count is “Pixel count positive”.
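If letters are preferred, as requested, the counter can be mapped through the alphabet instead. This is a hedged variant of the script above under the same assumptions (M6, “Pixel count positive” child class); it will only label the first 26 squares.

```groovy
// Hedged sketch: same logic as the numbered version, but using letters A, B, C, ...
def letters = ('A'..'Z').toList()
int i = 0
getAnnotationObjects().each {
    def currentName = "Tile " + letters[i]
    it.setPathClass(getPathClass(currentName))
    it.getChildObjects().each { c ->
        if (c.getPathClass() == getPathClass("Pixel count positive"))
            c.setPathClass(getPathClass("Positive " + currentName))
    }
    i++
}
```

As with the numbered version, run it after positive pixel detection so the child objects exist.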

Also, scripts pasted directly into a comment can’t be copied out correctly; they need to be formatted as code blocks, as above.

Thank you. Alternatively, would it be possible to write a script for the first version of QuPath that gives objects the same name (name, not class) as the square in which they are contained? Thanks

I remember having trouble trying to edit the name, as it is generated dynamically from the information about the object (which is why it frequently is the same as the class).

Yes… I also have problems with the new version. It takes a lot of time for each command, so I have to use the first version. I tried your script on the M6 version, but all the tiles remain class “Tile 1”.

@Manuel_Baldinu It would be helpful if you could be more specific: which version is slow, and which commands?

Note that if by ‘latest version’ you mean anything other than m6 then my recommendation would be to try m6; I am not aware of any command in m6 that is substantially slower than any other version, but cannot fix any such issue unless it is either reported or I happen to notice it myself.

@Research_Associate There is a name property that is usually not set. It is not generated on demand. You change it with PathObject.setName('Anything')
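Following that hint, a sketch of using setName to answer the earlier request (giving each vessel object the name of its containing square) might look like this. It assumes the squares are annotations with the vessel objects as their children; if a square has no name set, its class name is used as a fallback.

```groovy
// Hedged sketch: copy each square's name onto its child objects via setName.
// Assumes squares are annotations and vessel sections are their children.
getAnnotationObjects().each { square ->
    def squareName = square.getName() ?: square.getPathClass()?.toString()
    square.getChildObjects().each { child ->
        child.setName(squareName)
    }
}
fireHierarchyUpdate()
```

Run inside QuPath’s script editor; how the name is displayed may differ between versions.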

Sorry, with “new version” I was referring to M6; it’s very slow to do the “Positive pixel count”. However, I think I found a way: I just need a script that converts the annotations into detections while keeping the same class as the annotation. I already have this script that converts “Positive” class annotations into “Positive” class detections:

def annotations = getAnnotationObjects()

// Change this line to subset = annotations if you want to convert ALL current annotations to detections. Otherwise adjust as desired.
def subset = annotations.findAll {it.getROI() && it.getPathClass() == getPathClass("Positive")}

// Create corresponding detections (name this however you like)
def classification = getPathClass('Positive')
def detections = subset.collect {
    new qupath.lib.objects.PathDetectionObject(it.getROI(), classification)
}

// Remove the original annotations & replace with detections
removeObjects(subset, true)
addObjects(detections)

Ah ok. I admit that is a command I ignored (and frequently forget exists). It was never very well designed/scalable for large regions. Eventually it should be replaced by either the pixel classifier or simple thresholding, but it remains for now because the alternatives don’t quite replace all its functionality yet.

OK, I think I solved it. I needed a script to turn annotations into detections because I had another script that extracted Feret’s diameter from detections. By changing the code I can extract it from the annotations instead. I’ll post it below in case it’s useful:

import qupath.imagej.objects.*

getAnnotationObjects().each {
    def ml = it.getMeasurementList()
    def roi = it.getROI()
    def roiIJ = ROIConverterIJ.convertToIJRoi(roi, 0, 0, 1)
    def maxDiam = roiIJ.getFeretValues()[0]
    def minDiam = roiIJ.getFeretValues()[2]
    ml.putMeasurement('Feret max diameter', maxDiam)
    ml.putMeasurement('Feret min diameter', minDiam)
    ml.putMeasurement('Average diameter', (minDiam+maxDiam)/2)
    ml.closeList()
}
fireHierarchyUpdate()
print "done"

Haha, yes, sorry, I didn’t include the i++ line at the end. Updated the script so that it should go Tile 1, 2, 3 etc.