Filling Gaps in DoG superpixels in QuPath

Hi All,
I’m doing an analysis in QuPath where I make DoG superpixels, classify them with a pre-trained classifier, and then convert one of the classes to annotations. Because there are gaps between the tiles when generating the superpixels, some of my final annotations are broken in two. I wrote a Groovy script to fix this, but it’s rather slow. The script first converts all superpixels into a single annotation and inverts it to find the “gaps”. Then it performs the real classification and uses “Tiles to Annotation” to make annotations of the correct class. It expands those annotations by a few pixels, finds the intersection between that and the gap object, and then merges the original annotations with the intersection object. The final result is a set of annotations of the class I care about, with the gaps made during DoG superpixel creation filled in.

This works for small ROIs, but on large tissues it’s rather slow. Would anyone be able to help me speed it up? The slow part appears to be expanding and intersecting the annotations.

Note: this is written for QuPath 0.2.0_m3. The annotations I’m going for are called “ducts”.


import qupath.lib.common.GeneralTools
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.PathObject
import qupath.lib.roi.RoiTools
import qupath.lib.roi.ROIs
import org.locationtech.jts.geom.Geometry
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import qupath.lib.roi.interfaces.ROI
import qupath.lib.roi.jts.ConverterJTS
import qupath.lib.regions.ImagePlane
import qupath.lib.objects.PathObjects

PATH_TO_CLASSIFIER = "c:\\this is my project\\whatever.qpclassifier"

//make superpixels
setImageType('BRIGHTFIELD_H_E');
setColorDeconvolutionStains('{"Name" : "H&E default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049 ", "Stain 2" : "Eosin", "Values 2" : "0.2159 0.8012 0.5581 ", "Background" : " 255 255 255 "}');
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 220,  "requestedPixelSizeMicrons": 20.0,  "minAreaMicrons": 10000.0,  "maxHoleAreaMicrons": 1000000.0,  "darkBackground": false,  "smoothImage": true,  "medianCleanup": true,  "dilateBoundaries": false,  "smoothCoordinates": true,  "excludeOnBoundary": true,  "singleAnnotation": true}');


selectAnnotations();
runPlugin('qupath.imagej.superpixels.DoGSuperpixelsPlugin', '{"downsampleFactor": 2.0,  "sigmaMicrons": 3.0,  "minThreshold": 10.0,  "maxThreshold": 255.0,  "noiseThreshold": 1.0}');

//convert superpixels into 1 giant annotation
dets=getDetectionObjects()
dets.each{it.setPathClass(getPathClass("Superpixel"))}
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.TileClassificationsToAnnotationsPlugin', '{"pathClass": "Superpixel",  "deleteTiles": false,  "clearAnnotations": true,  "splitAnnotations": false}');

//find the top-level tissue annotation and its child (the merged superpixel annotation)
rect=getAnnotationObjects().find{it.getLevel()==1}
outline=rect.getChildObjects()

//invert this to find the gaps in the superpixels
gaps=makeInverseAnnotation(outline[0])

//classify superpixels as needed
selectDetections();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 2.0,  "region": "ROI",  "tileSizeMicrons": 25.0,  "colorOD": true,  "colorStain1": true,  "colorStain2": true,  "colorStain3": false,  "colorRed": false,  "colorGreen": false,  "colorBlue": false,  "colorHue": false,  "colorSaturation": false,  "colorBrightness": false,  "doMean": true,  "doStdDev": true,  "doMinMax": true,  "doMedian": true,  "doHaralick": true,  "haralickDistance": 1,  "haralickBins": 32}');
runPlugin('qupath.lib.algorithms.CoherenceFeaturePlugin', '{"magnification": 5.0,  "stainChoice": "Optical density",  "tileSizeMicrons": 25.0,  "includeStats": true,  "doCircular": false}');
runPlugin('qupath.lib.plugins.objects.ShapeFeaturesPlugin', '{"area": true,  "perimeter": true,  "circularity": true,  "useMicrons": true}');
runPlugin('qupath.lib.algorithms.LocalBinaryPatternsPlugin', '{"magnification": 5.0,  "stainChoice": "Optical density",  "tileSizeMicrons": 25.0,  "includeStats": true,  "doCircular": false}');
selectObjects{it.getLevel()==1}  
runPlugin('qupath.lib.plugins.objects.SmoothFeaturesPlugin', '{"fwhmMicrons": 25.0,  "smoothWithinClasses": false,  "useLegacyNames": false}');

runClassifier(PATH_TO_CLASSIFIER);


//create annotations from classified superpixels
selectObjects{p-> p.getPathClass()==getPathClass("Superpixel")}
runPlugin('qupath.lib.plugins.objects.TileClassificationsToAnnotationsPlugin', '{"pathClass": "Duct",  "deleteTiles": true,  "clearAnnotations": false,  "splitAnnotations": false}');

//Expand Ducts
Ducts=getAnnotationObjects().find{it.getPathClass()==getPathClass("Duct")}

Geometry ductGeo=Ducts.getROI().getGeometry()
expandedGeo = ductGeo.buffer(3)

//Find overlap of expanded ducts with gap region
mergedAreas=gaps.getROI().getGeometry().intersection(expandedGeo)
//Merge original ducts with intersections from above
newDucts=ductGeo.union(mergedAreas)

//create new Duct object
ROI newRoi = ConverterJTS.convertGeometryToROI(newDucts, ImagePlane.getPlane(Ducts.getROI()))
PathObject newAnnots = PathObjects.createAnnotationObject(newRoi, Ducts.getPathClass())


//remove unnecessary objects and add newly expanded ducts
toRemove=[gaps,outline[0],Ducts]
removeObjects(toRemove,true)

addObject(newAnnots)
selectObjects{p-> p.getPathClass()==getPathClass("Duct")}
runPlugin('qupath.lib.plugins.objects.SplitAnnotationsPlugin', '{}');
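
One thing I haven’t tried yet that might speed up the buffer/intersection step: simplifying the duct geometry first. TopologyPreservingSimplifier is part of the JTS library QuPath bundles; the 0.5 pixel tolerance below is just an illustrative guess.

import org.locationtech.jts.geom.Geometry
import org.locationtech.jts.simplify.TopologyPreservingSimplifier

//reuse the "Duct" annotation found earlier in the script
def ductAnnotation = getAnnotationObjects().find{it.getPathClass()==getPathClass("Duct")}
Geometry ductGeo = ductAnnotation.getROI().getGeometry()

//simplify first (tolerance in pixels), then buffer as before
Geometry simplified = TopologyPreservingSimplifier.simplify(ductGeo, 0.5)
Geometry expandedGeo = simplified.buffer(3)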

Just some example images of what this fills in, based on similarly classified annotations on either side of the gap!

I wonder if something like this might work: https://gist.github.com/petebankhead/e23393125fa57fe91c67f5003cbea3e2

In that implementation, it would need to be possible to work at a resolution low enough to create a binary image (from which the distance transform is calculated).

It may be possible to avoid this step, e.g. using JTS to create a Voronoi diagram: https://locationtech.github.io/jts/javadoc/org/locationtech/jts/triangulate/VoronoiDiagramBuilder.html
I haven’t tried this yet so don’t know if/how well it works…
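
An untested sketch of how the Voronoi route might start, assuming one site per superpixel at its centroid (clipping and the use of the resulting cells are left open):

import org.locationtech.jts.geom.Coordinate
import org.locationtech.jts.geom.GeometryFactory
import org.locationtech.jts.triangulate.VoronoiDiagramBuilder

def factory = new GeometryFactory()

//assumption: one Voronoi site per superpixel, placed at its centroid
def sites = getDetectionObjects().collect{
    new Coordinate(it.getROI().getCentroidX(), it.getROI().getCentroidY())
}

def builder = new VoronoiDiagramBuilder()
builder.setSites(sites)
//returns a GeometryCollection of Voronoi cell polygons
def diagram = builder.getDiagram(factory)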

So, couldn’t sleep, and had a random, semi-terrible idea. It seems to work…

Rather than let QuPath create the tiles automatically, I added a step where I tiled my own annotation… and then I selected THOSE annotations, and expanded them all by one pixel. Then I created the DoG superpixels… which no longer had the one pixel border between tiles. So if you are willing to do an extra step or two at the start, you can bypass the entire problem.

Workarounds for days…

Oh! Good, that works. I meant to ask… were the gaps always there, or are they a new ‘milestone’ feature?

I think @smcardle said that they have always been there, but I have always used SLICs so I never noticed.

Wow, that’s a lot more elegant than my solution. Gives better results, too. I’ll rewrite the code to automate it and post that when it’s ready.

Thanks! I was thinking about how all of the possible solutions were held up by only being able to adjust the hierarchy one object at a time (and with complex objects), and the point with the fewest, simplest objects is before any superpixels exist. I suppose if you want to make the script extra adaptive, you can create a variable from the pixel size and use it in the expand-annotations step.
Something like this:

//def pixelHeight = getCurrentImageData().getServer().getPixelHeightMicrons()

def pixelHeight = getCurrentImageData().getServer().getOriginalMetadata().pixelHeightMicrons
selectAnnotations()
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 500.0,  "trimToROI": true,  "makeAnnotations": true,  "removeParentAnnotation": true}');
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": '+pixelHeight+',  "removeInterior": false,  "constrainToParent": false}');
selectAnnotations()
runPlugin('qupath.imagej.superpixels.DoGSuperpixelsPlugin', '{"downsampleFactor": 1.0,  "sigmaMicrons": 1.0,  "minThreshold": 10.0,  "maxThreshold": 230.0,  "noiseThreshold": 1.0}');

One of those pixel heights is for 0.2.0m2+, and the other is for most earlier versions, I think…

Your workaround with creating my own tiles is definitely faster than before. From there, I can generate the superpixels, convert 1 class into annotations in each tile individually, and then merge all the annotations.
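
For reference, the per-tile conversion and merge looks something like this (a rough sketch, assuming the superpixels are already classified and “Duct” is the class of interest):

//convert classified tiles to annotations within each tiled parent annotation
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.TileClassificationsToAnnotationsPlugin', '{"pathClass": "Duct",  "deleteTiles": true,  "clearAnnotations": false,  "splitAnnotations": false}');

//merge the resulting per-tile "Duct" annotations into one
selectObjects{it.isAnnotation() && it.getPathClass()==getPathClass("Duct")}
mergeSelectedAnnotations()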

The problems came when I tried to apply this method to a whole image. What worked really well on a relatively small portion of an image kept generating a “linked hash iterator” error on the whole image. I was convinced it just meant I was doing something wrong. I tried every method I could think of to rearrange the order in which it did things, and nothing worked. I eventually gave up and moved back to SLICs in m2, which now works beautifully. While I was trying to figure out what I was doing wrong, it looks like you and Pete found some similar memory issues in m3 and have already fixed them for the next milestone.

Thanks for your help! I learned a lot through banging my head against this, even if the final solution didn’t use it.

Doh, well, thanks for letting us know. Might be worth testing out in M4!

Actually, which step was problematic? The merge? The adjustment of the tiles by one pixel?

I divided the image into ~100 tiles, expanded each by 1 pixel, and then generated and classified superpixels for each expanded tile. From there, I tried a few different things, but they all failed.

  1. Tiles -> annotations for each tile separately (single annotation), and then merge the 100 annotation objects into 1. It would crash at the merge step. By crash, I mean it would show up as “not responding” on my task manager, use a lot of memory, but never progress no matter how long I left it (up to 24 hours). Sometimes it would give me the “linked hash iterator” error in the pop-up log window, sometimes it just stopped responding with no error message.
  2. Tiles -> annotations for all tiles simultaneously. It would crash.
  3. Tiles -> annotations for groups of 10 tiles at a time, repeated 10 times. This would crash at random during one of the conversion steps.
  4. Tiles -> annotations for each tile individually while splitting the annotations, then merging the annotations afterwards. Crashed during merge.

When I tested each of these methods on a small-ish annotation (10% of the image), they would complete in minutes. But any larger and it would never finish at all.

Oh, ok, so the tiles part worked to get rid of the gap (whew), it was the merging that was problematic. Whelp, I don’t think I could script my way out of that one :)

It might be the same general issue that has plagued other annotation abuse, where there are simply too many vertices in an image for annotation objects. Depending on what you are doing downstream, you may want a different process, or find a way to abuse detections vs annotations.
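
For example, something like this might keep the final merged geometry as a detection rather than an annotation (untested; newDucts stands in for the merged Geometry from the script above):

import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.jts.ConverterJTS

//hypothetical: wrap the merged geometry as a detection, which is cheaper to handle in bulk
def roi = ConverterJTS.convertGeometryToROI(newDucts, ImagePlane.getDefaultPlane())
def ductDetection = PathObjects.createDetectionObject(roi, getPathClass("Duct"))
addObject(ductDetection)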

Were you planning on generating cells or something like that later, I guess? Haven’t had a chance yet to test this out, but should be able to this afternoon or tonight.