Exporting threshold coverage from QuPath

I am attempting to create a masked image that looks something like this (see the uploaded image below), where one color designates positive staining (such as staining selected by a threshold) and the other color is just empty or ignored.

I am curious whether I can export just the threshold overlay on my image to a program like ImageJ, where I can then save the image so it exists in the simple form seen above?

I know that I could create a mask/binary image representing positive thresholding in ImageJ quite easily, but I would like to keep the thresholding/annotation part in QuPath if possible, as I like the ability to annotate away the thresholder’s false positives on staining artifacts. In the image below I have a large outer region, and then I annotated smaller areas that I want my thresholder to ignore.

So far I have been able to make an annotation around my tissue of interest, then use my created thresholder to “create objects”, and then export that annotation with those objects into ImageJ. I am now stuck at this point because the objects that I create are hollow, and I’m not sure how I would select just the objects to create a black and white image.

Any suggestions would be awesome, thanks so much!

  1. If you want to ignore the background, it would probably be best to classify it as Ignore* so that no object is created there.
  2. Maybe create the masks directly? https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html#binary-labeled-images
  3. Export what you see, but you will probably need to downsample for ImageJ to open the image… the relevant commands are in the File menu (screenshot: File menu, with the exported PNG on the left).

There is another option: click the rightmost button at the bottom of the pixel classifier window – with the vertical ellipsis icon – and choose Save prediction image.


Thank you both for your help, I really appreciate your time!

A question for Pete: what you described is really, really close to what I want, but when I save the OME-TIFF it applies the threshold to the entire slide and exports the entire slide. Is there a way to export just my annotation?

Here is what is saved and then opened in ImageJ:

If it is at all possible, I would love to export just my threshold together with those annotations (seen in the image below) that are used to ignore artifacts. Even in the full-slide image (seen above), the exported thresholder doesn’t ignore my annotations when the saved prediction image is opened in ImageJ.

Thanks again everyone! Please let me know if more images or any other info would be helpful

@mcgeedev can you explain why exactly you want to do this? It sounds like what you want is very specific, in which case it will probably require a script.

My suspicion is that using QuPath to generate annotations, and then exporting the annotation masks or labelled images (using the documentation linked above), is the most sensible method. But I might not understand precisely enough what your goal is… and what you’ll do with the image in ImageJ.

@petebankhead
I need to use QuPath to open my large whole-slide scans. But with all the features you’ve added, it seems like QuPath can do so much more and potentially act as a one-stop shop for much of our QUINT workflow (if you are not familiar, you can see https://www.frontiersin.org/articles/10.3389/fninf.2019.00075/full to get a general idea of what we’re doing). Essentially I am registering one of my brain sections to an atlas, and then using that registration to identify which regions we have staining in.

If I can, I want to replace the ilastik step (in the diagram pictured below) with some basic stain detection plus manual annotation to exclude artifacts and some edge effects. I can then use the resulting threshold mask in the later steps of comparing stain to registered region.

Being able to open my image, annotate tissue folds, artifacts, etc., and then export thresholding masks that can be used in our workflow would go a long way toward making the workflow smoother and quicker.

I will reread the documentation above too to make sure I didn’t overlook that as my best option!

Thanks again Pete!
Devin

It sounds like you want the holes you made in the annotation to be treated as background in a binary image, while the rest of the mask stays.
You will probably want one annotation/class for your bounding box, then use your thresholder to create an annotation within that bounding box, and then edit the thresholder-created annotation with your holes/manual edits. Then you can export the bounding box, using the thresholder-created annotation to define your mask.

Thanks @mcgeedev, nice paper! It looks like a very nice workflow to solve a complicated problem. Hopefully QuPath can indeed help streamline it even further.

I still don’t entirely understand the goal of the export (i.e. in what format things need to be for the next input stage), but I presume from the screenshot that at least one issue is that multiple sections are arranged on the same 2D slide and so you need to separate these out during export.

It seems that you can either

  1. export the predictions from the QuPath classifier
  2. export the QuPath objects generated from the predictions of the classifier

If you choose option 1, you need to crop the exported region. Here’s a script that can do so (assuming you are using a project in QuPath and have a selected object):

// Run in the QuPath script editor (with default imports), inside a project,
// with the pixel classifier saved and an annotation selected in the viewer
def imageData = getCurrentImageData()
def classifier = loadPixelClassifier('classifier name')

// Server that generates the classifier's prediction image on demand
def predictionServer = PixelClassifierTools.createPixelClassificationServer(imageData, classifier)

// Output path inside the project directory
def path = buildFilePath(PROJECT_BASE_DIR, 'prediction.tif')

// Crop the export to the selected object's ROI, at the classifier's own resolution
def roi = getSelectedROI()
def downsample = predictionServer.getDownsampleForResolution(0)
def request = RegionRequest.createInstance(predictionServer.getPath(), downsample, roi)
writeImageRegion(predictionServer, request, path)

If you use option 2, then you potentially have more options… including a lot more control over precisely what is exported, and in what format – such as filtering out regions based upon size/shape criteria (a sketch of such a filter follows the example below).

See for example https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html#individual-annotations
where you can replace

for (annotation in getAnnotationObjects()) {

with something more selective, e.g.

def unclassifiedAnnotations = getAnnotationObjects().findAll {it.getPathClass() == null}
for (annotation in unclassifiedAnnotations) {
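
And for the size/shape filtering mentioned above, a similarly partial sketch in the same style (the 1000-pixel² cutoff is an arbitrary placeholder; getArea() here is measured in pixels):

def bigEnough = getAnnotationObjects().findAll { it.getROI().getArea() > 1000 }
for (annotation in bigEnough) {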

Alternatively, you can avoid exporting raster images entirely and use GeoJSON to export the classification results instead (although presumably this will involve changing other code in your pipeline).
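
A minimal sketch of that GeoJSON route, assuming a QuPath version recent enough to provide exportObjectsToGeoJson as a built-in scripting method (classifications and measurements travel with each exported object):

// Export all annotations, with their classifications, as a GeoJSON feature collection
def annotations = getAnnotationObjects()
def path = buildFilePath(PROJECT_BASE_DIR, 'annotations.geojson')
exportObjectsToGeoJson(annotations, path, "FEATURE_COLLECTION")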

@Research_Associate

It sounds like you want the holes you made in the annotation to be treated as background in a binary image, while the rest of the mask stays.

yep exactly!

Then you can export the bounding box, using the thresholder-created annotation to define your mask.

Can you describe specifically what you mean by this? I want to make sure I’m not misunderstanding.

Thanks again for all your responses and help!

@petebankhead

Hi Pete,

This is great! One thing I would like to add to that code, if I can, is for it to exclude the staining inside my annotations, as seen in the picture below.

As of right now, my output includes it, like this:

Is there a good resource for me to do this? My googling left me with only half-formed ideas. I was having a hard time figuring out how to differentiate between the region of interest (which is called by the code) and the annotations I made, which I combined with the original rectangle annotation.

Thanks again for all your help and responses!

If you were to manually draw a bounding box, you could assign it the class “Bounding”. Then, for your export script, you would cycle through all annotations of class Bounding and use the standard LabeledImageServer from the readthedocs site. Pete describes how to do this above, in the part about “more selective, e.g.”

Then have the label server apply a fill only for certain classes that do not include “Bounding”, so that class is ignored.

So the steps would be

  1. Create bounding boxes
  2. Run pixel classifier to create annotations within each bounding box (scripted for all bounding boxes)
  3. Manually edit the created annotations to exclude areas
  4. Export each bounding box so that it marks your annotations as a label, and leaves the rest as background.

Step 1 could be eliminated entirely, if you go high-throughput, by creating bounding boxes automatically from the dimensions of the objects generated in step 2. However, since you are going through and editing manually anyway, you may as well control this part manually as well.
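
A minimal sketch of step 4, following the LabeledImageServer pattern from the readthedocs page linked above. The class names ‘Bounding’ and ‘Positive’ are placeholders for whatever classes you actually assign, and the downsample value is arbitrary:

import qupath.lib.images.servers.LabeledImageServer

def imageData = getCurrentImageData()
double downsample = 4.0 // arbitrary; increase if ImageJ struggles with the size

// Only 'Positive' gets a label value; 'Bounding' is unlabeled, so it stays background
def labelServer = new LabeledImageServer.Builder(imageData)
    .backgroundLabel(0, ColorTools.WHITE) // background = 0, as in the docs example
    .downsample(downsample)
    .addLabel('Positive', 1)              // edited thresholder annotations = 1
    .build()

// One mask per bounding box, cropped to that box's ROI
def boundingBoxes = getAnnotationObjects().findAll {
    it.getPathClass() == getPathClass('Bounding')
}
boundingBoxes.eachWithIndex { box, i ->
    def region = RegionRequest.createInstance(labelServer.getPath(), downsample, box.getROI())
    writeImageRegion(labelServer, region, buildFilePath(PROJECT_BASE_DIR, "mask-${i}.png"))
}

Because ‘Bounding’ is never given a label, the box itself renders as background, and only the (manually edited) thresholder annotations appear in the mask.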

Ah I see, ok thanks! I have been dipping my toes into the coding/scripting side of QuPath, but it has been quite intimidating so far, as I have no experience with Groovy and little programming experience in general. I think I will devote some time to diving deeper this week, though. Thanks again for all your help; it’s really appreciated! @Research_Associate


This is actually very similar to another project where someone was building a 3D image out of serial sections on the same slide. They were exporting the actual image, though. The pixel classifier was used to create the borders of all tissue slices, then a bounding box was created around each slice, and the resulting objects were ordered left to right, then top to bottom. The final step was exporting each image at full resolution and labeling it by its position/order on the slide. If that is similar to what you are doing, I can see if they mind sharing the code for what we did.

Ah interesting! I think our goal is simpler than that, as we just need a masked binary image that is the same size as our bounding box. We aren’t too worried about order or 3D as of yet.
