Nucleus Estimation Region in QuPath

Hi everyone,
I’m using QuPath detections to create an estimation region on where the nucleus is based on its area and center of mass. I’m currently using this to help with ground truth comparisons, but I see it may be helpful for any deep learning algorithms that uses estimated background regions, area, etc. I’m assuming perfect circularity to get the dimensions of the rectangular annotations for each detected nucleus. It’s a simple script and probably not the most efficient, but I’d love to know what you think and how I can expand on this idea :slight_smile:
The script is written in Groovy and was implemented in QuPath m8.

import qupath.lib.regions.*
import ij.*
import java.awt.Color
import java.awt.image.BufferedImage
import javax.imageio.ImageIO
import static qupath.lib.gui.scripting.QPEx.*
import qupath.lib.objects.PathObjects
import qupath.lib.roi.ROIs

// Create a blank mask image matching the current image dimensions
// (grayscale here; won't work for multichannel images!)
double downsample = 1.0
def server = getCurrentImageData().getServer()
int w = (server.getWidth() / downsample) as int
int h = (server.getHeight() / downsample) as int
def img = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY)
//def img = new BufferedImage(w, h, BufferedImage.TYPE_3BYTE_BGR)  // Replace for an RGB image
def plane = getCurrentViewer().getImagePlane()

// Remove any existing annotations so only the new estimation regions remain
selectAnnotations()
clearSelectedObjects()

def g2d = img.createGraphics()
g2d.scale(1.0/downsample, 1.0/downsample)
g2d.setColor(Color.WHITE)
//g2d.setColor(Color.RED) // Replace to fill detections in red

// Create a square annotation centered on each detection's center of mass,
// with side 2r where r = sqrt(area / pi), i.e. assuming a circular nucleus
for (detection in getDetectionObjects()) {
    def detectionROI = detection.getROI()
    double cx = detectionROI.getCentroidX()
    double cy = detectionROI.getCentroidY()
    double area = detectionROI.getArea()
    double r = Math.sqrt(area / Math.PI)
    int x = Math.floor(cx - r) as int
    int y = Math.floor(cy - r) as int
    def roi = ROIs.createRectangleROI(x, y, 2*r, 2*r, plane)
    def annotation = PathObjects.createAnnotationObject(roi)
    addObject(annotation)
}

// Fill each annotation's shape into the mask
for (annotation in getAnnotationObjects()) {
    def shape = annotation.getROI().getShape()
    g2d.fill(shape)
}
g2d.dispose()
new ImagePlus("Mask", img).show()

// Write the mask to a 'Centroids' folder inside the project directory
def name = getProjectEntry().getImageName() //+ '.tiff'
def path = buildFilePath(PROJECT_BASE_DIR, 'Centroids')
mkdirs(path)
def fileoutput = new File(path, name + '-maskred.png')
ImageIO.write(img, 'PNG', fileoutput)
println('Results exporting...')


Adjusted the script formatting to be copy-and-pasteable.


Revisited this and wanted to point out a couple of things!

Looping through the detection objects will cycle through whole cells if cell expansion is turned on, and not necessarily give you just the nucleus. That may be what you want, but it depends on the type of cell detection you used. Using the detection ROIs directly is only accurate with 0 cell expansion, since the cells are then no longer… cells.

If you do have cells, you can use getCellObjects().each{ it.getNucleusROI() } to get just the nucleus.
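For example, a minimal sketch (untested) collecting the nucleus ROIs while skipping any cell without one:

def nucleusROIs = getCellObjects().collect { it.getNucleusROI() }.findAll { it != null }
print(nucleusROIs.size() + ' nucleus ROIs found')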

To expand on that, you could get the actual bounding box for the nucleus rather than a fixed square, if you wanted.
An example defining the “bounds” of an ROI from Pete, here.

def roi = getSelectedObject().getROI()
print([roi.getBoundsX(), roi.getBoundsY(), roi.getBoundsWidth(), roi.getBoundsHeight()])
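Putting those together, a rough sketch (my suggestion, not tested) that creates an annotation from each nucleus’s bounding box instead of a square derived from its area:

// Create an annotation from each nucleus's actual bounding box
for (cell in getCellObjects()) {
    def nucleusROI = cell.getNucleusROI()
    if (nucleusROI == null)
        continue
    def roi = ROIs.createRectangleROI(
            nucleusROI.getBoundsX(), nucleusROI.getBoundsY(),
            nucleusROI.getBoundsWidth(), nucleusROI.getBoundsHeight(),
            nucleusROI.getImagePlane())
    addObject(PathObjects.createAnnotationObject(roi))
}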

I’m also not sure you need to create annotations, since you only need the ROI to create the white spots. It looks like you are getting an ROI, creating an annotation from it, and then getting the ROI again from the annotation. It probably slows things down significantly to create all of the annotations.
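For instance, the masking loop could fill the shapes straight from the detection ROIs, skipping annotation creation entirely (a small sketch reusing the g2d setup from your script):

// Fill the mask directly from detection ROIs, no annotations needed
for (detection in getDetectionObjects()) {
    g2d.fill(detection.getROI().getShape())
}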


Thank you very much for taking the time to look over this. I really appreciate it!

With the information about the bounding box I won’t need to assume perfect circularity, which should give a more accurate estimation region! I’ll try to revise it.
Also, you’re very right: once detections reach into the thousands it does take noticeably longer to generate all the annotations. With my limited knowledge of QuPath scripting, I’m currently only familiar with filling in masks from annotations/detections. The image below describes how I wish to visualize the differences in detections under different parameters. This is the only image I have at the moment, so it’s REALLY not ideal. But areas of overlap (pink) without broken contours would indicate to me that individual nuclei detections agree despite the change in parameters.
