Assign point objects to different ROIs by overlap

Dear all,

I would like to count positive and negative cells for different manually drawn tissue regions.

In the example screenshot below I have positive and negative cells imported as two types of point objects (coming from an external image analysis workflow). I also have two tissue regions as polygon annotations. How can I assign the positive and negative points by overlap to the two regions, so that I can eventually count positives and negatives for each region? Preferably without scripting.

I tried splitting the point clouds into single objects, which assigns the points to the regions automatically, but then I lose their properties (name and color) that distinguish positive from negative points.


(the displayed histology image is just a placeholder)

all the best
Christoph

I would probably convert all the points to detections using a script like this one, but include a line to carry over the class. It somewhat depends on whether the points are actually classified, or just named for their class. I ran into that problem before in another post here about points.
Adjusted code:

import qupath.lib.roi.EllipseROI
import qupath.lib.objects.PathDetectionObject

def points = getAnnotationObjects().findAll{ it.isPoint() }
// Cycle through each points object (each one is a collection of points)
points.each{ pointsObject ->
    // Carry over the class from the points object
    def pathClass = pointsObject.getPathClass()
    // Cycle through all points within the points object
    pointsObject.getROI().getPointList().each{ p ->
        // For each point, create a circle on top of it that is "size" pixels in diameter
        double x = p.getX()
        double y = p.getY()
        double size = 5
        def roi = new EllipseROI(x - size/2, y - size/2, size, size, 0, 0, 0)

        def aCell = new PathDetectionObject(roi, pathClass)
        addObject(aCell)
    }
}
// Remove the original points objects if desired
removeObjects(points, false)
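Once the points have been converted to classified detections sitting inside the region annotations, the per-region counting could look something like the sketch below. The class names "Positive" and "Negative" are assumptions here and should match whatever classifications your points carry:

```groovy
// Minimal counting sketch - assumes the detections are already children of the
// region annotations, and that classes named "Positive"/"Negative" are in use
def positive = getPathClass("Positive")
def negative = getPathClass("Negative")
getAnnotationObjects().findAll{ !it.isPoint() }.each{ region ->
    // Collect the detections assigned to this region in the hierarchy
    def cells = region.getChildObjects().findAll{ it.isDetection() }
    def nPos = cells.count{ it.getPathClass() == positive }
    def nNeg = cells.count{ it.getPathClass() == negative }
    print(region.getName() + ": " + nPos + " positive, " + nNeg + " negative")
}
```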

Hi Mike,

this looks great, thank you very much for the detailed and helpful answer! I’ll try this script and will give feedback.

Best
Christoph

When you split the points, they do indeed lose their names - but retain classifications. So if you set the classification (rather than name) of your positive and negative points then you can still distinguish them after splitting.

That said, I haven’t worked with points all that much myself. I’m not entirely sure how well they figure out where they belong in the hierarchy in v0.1.2 (i.e. whether they will always end up ‘inside’ the expected object when you split them, and remain in the right place if you add or move objects). Depending on how they behave, you might well find the convert-to-detection script a better option anyway.
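If the imported points currently only carry names, one quick way to turn those names into classifications before splitting is a short script along these lines (a sketch that assumes each points object is named after its intended class, e.g. "Positive"/"Negative"; adjust to match your import):

```groovy
// Set each points object's classification from its name (assumed naming scheme)
getAnnotationObjects().findAll{ it.isPoint() }.each{
    it.setPathClass(getPathClass(it.getName()))
}
// Refresh the hierarchy/display after changing classifications
fireHierarchyUpdate()
```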


@Research_Associate
@petebankhead

Dear Mike and Pete,

thank you for your very helpful comments and code. By assigning the point clouds to classes, as you recommended, and running Mike’s modified script (without any additions), it worked perfectly.

This is cool, because we now have a workflow to classify cell objects as positive/negative with deep learning, using our YAPiC tool. The data comes from an Axioscan slide scanner (czi files). Tiff tiles are exported and processed with deep learning to detect cells and classify them as positive/negative. The cell object data is then imported into QuPath and can be displayed on the original czi file. It’s a bit of a workaround, but the implementation was not too complex and it works perfectly for our setup.

The screenshot shows the cells (positive, negative) as displayed in QuPath.
