Reclassify objects by shared perimeter

Greetings,

Is there a way to reclassify objects (detections/annotations) below a certain size to the class of the object that they share the greatest fraction of their perimeter with?


[example image: a small annotation classed ‘Necrosis’ sharing most of its border with a larger ‘Tumor’ annotation]

In the example above, I’d like to reclassify any annotation with an area of less than 20,000 µm² to the class that it shares the highest fraction of its border with. In this case, the highlighted ‘Necrosis’ annotation would be reclassified to ‘Tumor’ and merged into the larger annotation.
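For concreteness, here’s roughly the quantity I mean, using the JTS geometries QuPath exposes via getROI().getGeometry() (an untested sketch; the closure name and the 1-pixel buffer tolerance are just placeholders):

import static qupath.lib.gui.scripting.QPEx.*

// Fraction of annotation a's boundary that lies along annotation b,
// using a small buffer (here 1 pixel) as the 'touching' tolerance
def sharedPerimeterFraction = { a, b ->
    def boundary = a.getROI().getGeometry().getBoundary()
    def near = b.getROI().getGeometry().buffer(1.0)
    boundary.intersection(near).getLength() / boundary.getLength()
}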

Perhaps there’s no built-in way to do this by design, in which case does anyone have recommendations for how to reclassify small objects into their best neighbor (greatest shared perimeter, closest proximity, etc.)?

I don’t have the code handy, but I would probably use findAll to get the small annotations, expand each one by some distance X, intersect the expanded versions with all other annotations using JTS (so the originals aren’t destroyed; you could restrict this to annotations over a certain size), and keep the class of the largest intersection by area. Then apply that class to the original annotation.
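Off the top of my head, something like this might do it (untested; the 20,000 µm² threshold and 5 µm expansion are just example values, and it assumes the JTS geometries available through getROI().getGeometry() in QuPath 0.2+):

import static qupath.lib.gui.scripting.QPEx.*

double expandMicrons = 5.0   // how far to expand each small annotation (example value)
double maxAreaUm2 = 20000    // area threshold, in square micrometers

double pixelSize = getCurrentImageData().getServer().getPixelCalibration().getAveragedPixelSize()
double expandPx = expandMicrons / pixelSize
double maxAreaPx = maxAreaUm2 / (pixelSize * pixelSize)

def annotations = getAnnotationObjects()
def small = annotations.findAll { it.getROI().getArea() < maxAreaPx }
def large = annotations - small

small.each { s ->
    // Buffer a copy of the geometry so the original ROI is untouched
    def expanded = s.getROI().getGeometry().buffer(expandPx)
    // Keep the neighbor with the largest intersection by area
    def best = large.max { n -> expanded.intersection(n.getROI().getGeometry()).getArea() }
    if (best != null && expanded.intersection(best.getROI().getGeometry()).getArea() > 0)
        s.setPathClass(best.getPathClass())
}
fireHierarchyUpdate()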

After running through all of the small annotations, select all annotations of each class and merge them.
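For the merge step, maybe something like this (again untested; note that unclassified annotations, with a null class, would also get merged together):

import static qupath.lib.gui.scripting.QPEx.*

// Merge all annotations sharing the same class into one object per class
getAnnotationObjects().groupBy { it.getPathClass() }.each { pathClass, objs ->
    if (objs.size() > 1)
        mergeAnnotations(objs)
}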

I’m having a bit of trouble with the expansion step and with assigning the class of the greatest intersection. Currently, I can select objects below a certain size threshold and assign them all to a class:

import static qupath.lib.gui.scripting.QPEx.*
////////
//Variables to set
max_object_size=20000 //Max area of annotations to reclassify, in square micrometers (µm²)
class_name='Tumor' //class to set small objects to
////////
//get resolution
double pixelSize = getCurrentImageData().getServer().getPixelCalibration().getAveragedPixelSize() 
//convert the µm² area threshold into squared pixels
size_thresh_pix = max_object_size / Math.pow(pixelSize, 2)
//select small annotations
def smallAnnotations = getAnnotationObjects().findAll {it.getROI().getArea() < size_thresh_pix} 
//reclassify all small objects to this class
def selected_name = getPathClass(class_name)
smallAnnotations.each {it.setPathClass(selected_name)}
//Merge all annotations of the same class (optional); getAnnotationObjects() already returns only annotations
//def annotations = getAnnotationObjects().findAll {it.getPathClass() == getPathClass(class_name)}
//mergeAnnotations(annotations)

The reason I’m trying to do this is to measure:

  • the area of necrosis in the image. The pixel classifier I’m using to generate the annotations yields small misclassified fragments throughout the image (even at the lowest resolution listed in the drop-down menu)
  • mean intensity of DAB staining in the viable tumor area

Given that others haven’t made similar posts, I feel like I’m missing something… Is there a way to set the resolution for a pixel classifier to a value not listed in the drop-down menu? Or is this a case for using superpixel segmentations?

If you only have two classes, Necrosis and EverythingElse, then your easiest option is what you found: take anything too small and swap its class.

I don’t know of any way to force different resolution values for the pixel classifier without rebuilding it. It would be nice to have greater context at times, but for those cases I have used superpixels. Superpixels were also easier to manipulate based on their surroundings, since you could write something equivalent to: “check all SLICs within X microns; if the current object’s class differs from the most common class, and the most common class accounts for 70% of the objects, change the class.”
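A rough sketch of that idea (untested; the 50 µm radius and 70% cutoff are example values, and the centroid-distance search is a naive O(n²) stand-in for a proper spatial query):

import static qupath.lib.gui.scripting.QPEx.*

double radiusMicrons = 50    // neighborhood radius (example value)
double majorityCutoff = 0.7  // required fraction for the dominant class

double pixelSize = getCurrentImageData().getServer().getPixelCalibration().getAveragedPixelSize()
double radiusPx = radiusMicrons / pixelSize

def slics = getDetectionObjects()
def newClasses = [:]
slics.each { d ->
    double cx = d.getROI().getCentroidX()
    double cy = d.getROI().getCentroidY()
    // Find all SLICs whose centroids fall within the radius (includes d itself)
    def neighbors = slics.findAll { n ->
        double dx = n.getROI().getCentroidX() - cx
        double dy = n.getROI().getCentroidY() - cy
        dx*dx + dy*dy <= radiusPx*radiusPx
    }
    def counts = neighbors.countBy { it.getPathClass() }
    def best = counts.max { it.value }
    if (best.key != d.getPathClass() && best.value / neighbors.size() >= majorityCutoff)
        newClasses[d] = best.key
}
// Apply the changes only after all votes are counted, so earlier swaps don't bias later ones
newClasses.each { d, pc -> d.setPathClass(pc) }
fireHierarchyUpdate()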

Otherwise, try to improve the classifier =/
