How to export only areas within annotations as whole picture or tiles

Hi, sorry if this question was already posted here, but I couldn't find a solution or a thread with the exact same question (I searched QuPath and online for 2-3 hours).

I want to export only the area of an SVS file inside my e.g. “tumor” annotation, to afterwards generate tiles for deep learning that contain only tumor tissue. With scripts presented regarding QuPath I managed to export an annotation as a white area (tumor) within a black area (healthy tissue and background) as a JPG, but I don’t see how this helps me further in my case.

And I think using “raw” tumor tissue without masks would also make things a little easier afterwards?
Is there already a solution?

Not exactly sure what you have tried, but were you using the tile exporter
(https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html#tile-exporter)
with

annotatedTilesOnly(false) // If true, only export tiles if there is a (classified) annotation present

set to true?
Note that if you want to export tumor tiles this way, you would need to have only the tumor-class annotations present. Also, how it works is a little tricky and may require downstream cleanup; see the related forum thread for some helpful suggestions.
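For reference, a minimal tile-export script along the lines of the linked documentation might look like the sketch below. It must run inside QuPath's script editor; the output path, downsample, and tile size are illustrative placeholders, not values from the original post.

```groovy
// Minimal sketch of tile export via TileExporter, following the QuPath docs linked above.
// Output directory and tile parameters are illustrative placeholders.
import qupath.lib.images.writers.TileExporter

def imageData = getCurrentImageData()

new TileExporter(imageData)
    .downsample(4)              // export at 4x downsampling
    .imageExtension('.jpg')     // file format for the exported tiles
    .tileSize(512)              // tile size in pixels, at the export resolution
    .annotatedTilesOnly(true)   // skip tiles where no (classified) annotation is present
    .overlap(0)                 // no overlap between adjacent tiles
    .writeTiles('/path/to/export/tiles')
```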

@sophia1 the main ways to export images (possibly including annotations) are summarized in the QuPath documentation on exporting images linked above.

The main thing to decide is whether QuPath should do the tiling or not… which may depend partially upon whether your regions are too big to export in one go.

To export a rectangle corresponding to a single annotation, use

// Write the region of the image corresponding to the currently-selected object
def server = getCurrentServer()
def roi = getSelectedROI()
def requestROI = RegionRequest.createInstance(server.getPath(), 1, roi)
writeImageRegion(server, requestROI, '/path/to/export/region.tif')

But you’ll still probably need an associated mask, so you’ll need to look into annotation export as well… which means the LabeledImageServer.

The labels/masks are important because at some point you need to decide whether or not a tile that overlaps your annotation boundary is included in your ‘tumor’ class for training. Exporting an annotation mask alongside your image allows you to control/check this later, e.g. in your Python code.

For example, you might decide to include a tile if its centroid is in the tumour region, if > 50% is within the tumour region… etc. Personally, I think it’s better to have a simple export script in QuPath that provides the information that you can later untangle in Python according to your precise application – rather than writing a complex QuPath script that makes these decisions at the point of export.
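The two inclusion rules mentioned above could be sketched in Python roughly like this. The function names, the 0.5 threshold, and the 0/1 mask format are assumptions for illustration, not part of QuPath or the original post:

```python
def include_tile(mask, threshold=0.5):
    """Decide whether a tile belongs to the 'tumor' class.

    mask: 2D list of 0/1 values for one tile, exported alongside it
    (1 = pixel inside the tumor annotation). The tile is kept if more
    than `threshold` of its pixels fall inside the annotation.
    Names and threshold are illustrative, not from QuPath itself.
    """
    total = sum(len(row) for row in mask)
    positive = sum(sum(row) for row in mask)
    return positive / total > threshold


def centre_in_tumor(mask):
    """Alternative rule: keep the tile if its centre pixel is annotated."""
    h, w = len(mask), len(mask[0])
    return mask[h // 2][w // 2] == 1
```

Because the decision happens in Python, you can change the rule (or the threshold) later without re-exporting anything from QuPath.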

That’s the rationale for the TileExporter and LabeledImageServer design as they currently are: they give flexible ways to export image tiles and masks for deep learning training. You can use either the TileExporter or the LabeledImageServer, both of them together, or neither if you prefer to write an alternative export script.

Thanks for your reply! I already tried this, also with annotatedTilesOnly(true). It works, but I still have healthy tissue around my annotations, which I am trying to avoid if possible. And yes, as the other person mentioned in the post you added, I definitely still end up with many tiles that are pretty much only white.

I think an approach like this would work great if I could also set all tissue outside the annotation to the same color as the background (or delete it), and then split the remaining tumor tissue into tiles.


Hi sophia1,
For this problem, I wrote a function in Python that counts the pixel values in the mask PNG images and returns “True” if more than 10% of the pixels are colored and “False” otherwise.

I also returned whether the tiles were red (Tumor) or green (Stroma), based on the annotation (this may be different depending on your annotations).

def percent_pixels_pos(test_image):
    """
    Iterates over the pixel values of an image and determines whether
    more than 10% of the pixels are colored, and the color of those pixels.

    Input: image as a 2D array of pixels (each pixel indexable, e.g. RGB)
    Output: first, whether > 10% of pixels are colored; second, the color of the pixels
    """
    num_pixel_pos = 0  # number of colored (annotated) pixels
    num_pix = 0        # total number of pixels
    color = None       # avoids an unbound name if no colored pixel is found

    for row in test_image:
        for pixel in row:
            num_pix += 1
            if pixel[0] == 150:
                color = "Green"
                num_pixel_pos += 1
            elif pixel[0] == 0:
                color = "Blue"
                num_pixel_pos += 1

    percent_pos = (num_pixel_pos / num_pix) * 100

    if percent_pos >= 10:
        return True, color
    else:
        return False, None


To use the function, I iterated over all the PNG mask files and copied the corresponding JPG file to another folder whenever percent_pixels_pos returned True.
