\(•ᴗ•)/ QuPath scripting (#2): Using CluPath and IJ to show image with additional local threshold channel

Hi, this is a second QuPath script related to
@kitcat's post How to normalise or subtract background in QuPath with DAB only stained sections?

and a follow up of \(•ᴗ•)/ QuPath scripting (#1): Using CluPath to save smoothed image regions

This script is designed for visual inspection of the local threshold output.

The script adds two additional channels to the RGB image currently displayed in QuPath:

  • channel 1: Red original
  • channel 2: Green original
  • channel 3: Blue original
  • channel 4: Absorbance Red
  • channel 5: Local Threshold (Phansalkar) of Red

This modified image is displayed temporarily in a new project entry.
The temporary image disappears if the image is (re-)loaded.

The main parameters can be adjusted in the ‘Parameter’ section of the script.

// Gauss smoothing parameter
double gaussSigma = 1.0

// Auto Local Threshold (Phansalkar method)
double k = 0.125
double r = 0.5
double radius = 10.0

The most important one is the parameter k. It directly influences the threshold characteristics.
More information regarding the Phansalkar local threshold method can be found on

and
https://blog.bham.ac.uk/intellimic/g-landini-software/auto-threshold-and-auto-local-threshold/

The Phansalkar method is implemented in this script by stacking clij functions. This can be replaced by a dedicated clij local threshold function in the future.

The ImageJ local threshold plugins could have been used instead of the clij functions. In that case, be aware that the IJ Phansalkar implementation includes a histogram stretching step, which can strongly influence the threshold result.
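For reference, the Phansalkar formula itself (with its standard parameters p = 2 and q = 10, operating on intensities normalized to [0, 1]) can be sketched as a plain function. This is an illustrative Java snippet, not part of the script; the class and method names are made up:

```java
// Illustrative sketch of the Phansalkar local threshold formula (hypothetical class).
// t = mean * (1 + p * exp(-q * mean) + k * ((stdev / r) - 1))
// mean and stdev are computed over a local neighbourhood of the given radius;
// a pixel counts as foreground if its value lies below t (dark objects).
public class PhansalkarDemo {

    static double threshold(double mean, double stdev, double k, double r) {
        double p = 2.0;   // standard Phansalkar parameter
        double q = 10.0;  // standard Phansalkar parameter
        return mean * (1.0 + p * Math.exp(-q * mean) + k * ((stdev / r) - 1.0));
    }

    public static void main(String[] args) {
        // bright, low-contrast neighbourhood with the script defaults k = 0.125, r = 0.5:
        // the threshold drops clearly below the local mean
        System.out.println(threshold(0.8, 0.05, 0.125, 0.5));
    }
}
```

Larger k pushes the threshold further below the local mean in low-contrast regions, which is why k is the main tuning parameter here.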

USAGE:

  • Open an RGB image in QuPath (single viewer)
  • Run the script
  • Inspect the image and the new channels with the Display>Brightness/Contrast tool
    (optional)
  • Add a second viewer
  • Load the original image into this second viewer
  • Synchronize both viewers to compare the additional channels with the original image

Here is the script file showAdditionalChannels_Absorbance_LocalThreshold_0.2.txt (15.3 KB)
(download file and change file extension to .groovy)

Here is the relevant part of the script:

// ******  Parameters  *******

// Gauss smoothing parameter
double gaussSigma = 1.0

// Auto Local Threshold (Phansalkar method)
double k = 0.125
double r = 0.5
double radius = 10.0


// ******  Script  *******

Project project = QP.getProject()
//def imagelist = project.getImageList()
//def entry = imagelist.get(0)

def entry = QP.getProjectEntry()

def name = entry.getImageName()
print name + '\n'

def imageData = entry.readImageData()
def currentServer = imageData.getServer()

// Create channel list from existing channels
def channels = []
channels.addAll(updateChannelNames(name, currentServer.getMetadata().getChannels()))
// and add additional channels
channels.add(ImageChannel.getInstance(name +"_absR", ColorBrown))
channels.add(ImageChannel.getInstance(name +"_ALT", ColorMagenta))
println 'n channels: ' + channels.size() + '\n'

// Create the new server, the new image & add to the project
def server8bit = new TypeConvertServer_AddChannels_clij_ij(currentServer, channels, gaussSigma, k, r, radius)
def imageDataCreated = new ImageData<BufferedImage>(server8bit)
imageDataCreated.setImageType(ImageData.ImageType.FLUORESCENCE)

Platform.runLater {
    // Create new project entry for the new channels ...
    // (use FLUORESCENCE image type to have access to the single channels directly)
    def newentry = ProjectCommands.addSingleImageToProject(project, currentServer, ImageData.ImageType.FLUORESCENCE)
    BufferedImage thumbnail = ProjectCommands.getThumbnailRGB(server8bit)
    newentry.setThumbnail(thumbnail)
    newentry.setImageName("My New Image")
    QPEx.getQuPath().openImageEntry(newentry)

    QPEx.getQuPath().refreshProject()

    QPEx.getCurrentViewer().setImageData(imageDataCreated)

    QPEx.getQuPath().openImageEntry(project.getEntry(imageDataCreated))

    QPEx.getQuPath().refreshProject()

    // .. or display the new channels in the current viewer of the original image instead
    //QPEx.getCurrentViewer().setImageData(imageDataCreated)
}

println 'Done!' + '\n'

// ****  end of script  ****

// Prepend a base name to channel names
List<ImageChannel> updateChannelNames(String name, Collection<ImageChannel> channels) {
    return channels
            .stream()
            .map( c -> {
                return ImageChannel.getInstance(name + '-' + c.getName(), c.getColor())
                }
            ).collect(Collectors.toList())
}

class TypeConvertServer_AddChannels_clij_ij extends TransformingImageServer<BufferedImage> {
    ImagePlus imp
    CLUPATH clijx
    float[] pixelsCLIJX

    double gaussSigma
    int extNP
    double k, r, radius
    float[] absorbance

    ImageServer<BufferedImage> currentserver
    private List<ImageChannel> channels
    private ImageServerMetadata originalMetadata
    def cm = ColorModelFactory.getProbabilityColorModel32Bit(channels)

    ArrayList<ColorTransformer.ColorTransformMethod> colortransformation = new ArrayList<ColorTransformer.ColorTransformMethod>()

    TypeConvertServer_AddChannels_clij_ij(ImageServer<BufferedImage> server, List<ImageChannel> channels,
                                       double gaussSigma, double k, double r, double radius) {
        super(server)
        currentserver = server
        this.channels = channels
        this.gaussSigma = gaussSigma
        this.k = k
        this.r = r
        this.radius = radius

        // Number of pixels of region extension
        extNP = Math.max(3, (int)(3 * gaussSigma))
        extNP = Math.max(extNP, Math.ceil(radius))

        this.originalMetadata = new ImageServerMetadata.Builder(currentserver.getMetadata())
               //.pixelType(PixelType.FLOAT32)
                .pixelType(PixelType.UINT8)
                .rgb(false)
                .channels(channels)
                .build()

        colortransformation.add(ColorTransformer.ColorTransformMethod.Red)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Green)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Blue)

        absorbance = new float[256]
        absorbance[0] = 255.0
        for (int n=1; n<256; n++) {
            absorbance[n] = -100.0*Math.log(n/255.0)   // note: a factor of -46.0 would keep all values <= 255
        }

        // CLIJX
        clijx = CLUPATH.getInstance("GeForce")
        print(clijx.getGPUName() + '\n')
    }

    public ImageServerMetadata getOriginalMetadata() {
        return originalMetadata
    }
    @Override
    protected ImageServerBuilder.ServerBuilder<BufferedImage> createServerBuilder() {
        return currentserver.builder()
    }
    @Override
    protected String createID() {
        return UUID.randomUUID().toString()
    }
    @Override
    public String getServerType() {
        return "My 8bit Type converting image server"
    }

    public BufferedImage readBufferedImage(RegionRequest request) throws IOException {
        String path = request.getPath()
        int ds = request.getDownsample()

        int xReq = request.x
        int yReq = request.y
        int wReq = request.width
        int hReq = request.height
        int zReq = request.getZ()
        int tReq = request.getT()

        ImageRegion region = ImageRegion.createInstance(xReq, yReq, wReq, hReq, zReq, tReq)
        RegionRequest requestExt = RegionRequest.createInstance(path, ds, region)

        def img = getWrappedServer().readBufferedImage(requestExt)
        def raster = img.getRaster()

        int nBands = raster.getNumBands()
        int w = img.getWidth()
        int h = img.getHeight()

        SampleModel model = new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, nBands + 2)
        byte[][] bytes = new byte[nBands + 2][w*h]
        DataBufferByte buffer = new DataBufferByte(bytes, w*h)
        WritableRaster raster2 = Raster.createWritableRaster(model, buffer, null)

        GaussianBlur gb = new GaussianBlur()

        float[] pixels
        float[] pixels2 = new float[w*h]
        FloatProcessor fp = new FloatProcessor(w, h, pixels2)
        float[][] pixelsChnX = new float[1][w*h]

        int[] rgb = img.getRGB(0, 0, w, h, null, 0, w)

        for (int b = 0; b < nBands; b++) {
            pixels = ColorTransformer.getSimpleTransformedPixels(rgb, colortransformation.get(b), null)

            // IJ
            fp.setPixels(pixels)
            gb.blurFloat(fp, gaussSigma, gaussSigma, 2.0E-4D)
            pixels = (float[]) fp.getPixels()

            if (b == 0) // 0 is Red channel
                System.arraycopy(pixels, 0, pixelsChnX[b], 0, pixels.length)

            // Add the original RGB channels
            raster2.setSamples(0, 0, w, h, b, pixels)
        }

        // here: only Red channel is used to derive additional channels
        for (int b=0; b<1; b++) {

            // Create and add absorbance Red channel
            for (int i=0; i<w*h; i++)
                pixels[i] = absorbance[(int)pixelsChnX[b][i]]

            raster2.setSamples(0, 0, w, h, nBands + 0, pixels)

            // Add Channel: Local Threshold

            // CLIJX: generate GPU memory buffer
            ClearCLBuffer input = clijx.pushArray(pixelsChnX[b], w, h, 1)
            ClearCLBuffer input2 = clijx.create(input.getDimensions(), clijx.Float)
            ClearCLBuffer clbMean = clijx.create(input.getDimensions(), clijx.Float)
            ClearCLBuffer clb1 = clijx.create(input.getDimensions(), clijx.Float)
            ClearCLBuffer clb2 = clijx.create(input.getDimensions(), clijx.Float)
            ClearCLBuffer clbtmp = clijx.create(input.getDimensions(), clijx.Float)

            // Auto Local Threshold (Phansalkar method) see: https://imagej.net/Auto_Local_Threshold
            // see code in:
            // https://github.com/fiji/Auto_Local_Threshold/blob/59f319b59e000f70d348577c388fe02188250f39/src/main/java/fiji/threshold/Auto_Local_Threshold.java
            // t = mean * (1 + p * exp(-q * mean) + k * ((stdev / r) - 1))

            double p = 2.0
            double q = -10.0   // negative sign included here, so clbtmp = q*mean below yields exp(-10*mean)

            // Calculation of the Auto Local Threshold (Phansalkar method)
            // 'Smooth' and 'Normalize' Input
            clijx.multiplyImageAndScalar(input, input2, 1.0/255.0)

            clijx.mean2DBox(input2, clbMean, radius, radius)
            clijx.standardDeviationBox(input2, clb2, radius, radius, 0.0)

            // stddev part of sum
            clijx.multiplyImageAndScalar(clb2, clb1, 1.0/r)
            clijx.addImageAndScalar(clb1, clbtmp, -1)
            clijx.multiplyImageAndScalar(clbtmp, clb2, k)
            // mean part of sum
            clijx.multiplyImageAndScalar(clbMean, clbtmp, q)
            clijx.exponential(clbtmp, clb1)
            clijx.multiplyImageAndScalar(clb1, clbtmp, p)
            clijx.addImageAndScalar(clbtmp, clb1, 1)
            // combined both parts
            clijx.addImages(clb1, clb2, clbtmp)
            clijx.multiplyImages(clbMean, clbtmp, clb1) // clb1 is threshold

            // apply threshold
            clijx.greaterOrEqual(input2, clb1, clb2)
			
            // invert Result
            clijx.binaryNot(clb2, clbtmp)
            clijx.multiplyImageAndScalar(clbtmp, input2, 255.0)

            // END of Calculation of the Auto Local Threshold (Phansalkar method)

            imp = clijx.pull(input2)
            pixelsCLIJX = (float[]) imp.getProcessor().getPixels()

            raster2.setSamples(0, 0, w, h, nBands + 1, pixelsCLIJX)

            // END of Add Channel: Local Threshold

            // close GPU memory buffers
            input.close()
            input2.close()
            clbMean.close()
            clb1.close()
            clb2.close()
            clbtmp.close()
        }

        return new BufferedImage(cm, Raster.createWritableRaster(model, buffer, null), false, null)
    }
}
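The absorbance lookup table built in the constructor maps an 8-bit intensity n to -100 * ln(n/255), i.e. a scaled Beer-Lambert absorbance. A minimal standalone sketch (hypothetical class name, not part of the script) shows the mapping and its edge cases:

```java
// Sketch of the absorbance LUT from the constructor above (hypothetical class name).
public class AbsorbanceDemo {

    static float[] buildLut() {
        float[] absorbance = new float[256];
        absorbance[0] = 255.0f;  // log(0) is undefined; clamp the darkest value
        for (int n = 1; n < 256; n++)
            absorbance[n] = (float) (-100.0 * Math.log(n / 255.0));
        return absorbance;
    }

    public static void main(String[] args) {
        float[] lut = buildLut();
        System.out.println(lut[255]);  // full transmission -> absorbance 0
        System.out.println(lut[64]);   // darker pixel -> higher absorbance
        System.out.println(lut[1]);    // very dark pixels exceed the 8-bit range
    }
}
```

As the inline comment in the script hints, a factor of -46.0 instead of -100.0 would keep every value at or below 255; with -100.0, intensities below about 20 map above 255 and are clipped when written into the UINT8 raster.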

EDIT: Script updated


Next step: Use the temporary channel for cell selection

As long as the temporary channels are visible they can be used for cell detection.

With an annotation selected in the upper viewer (the viewer with the temporary data), open the cell detection dialog and run the detection on the temporary LT (Local Threshold) channel.

Use the following parameters:

Important:
Requested pixel size has to be the original pixel size.

The cell detection may take a moment. So please either use a small ROI or be patient.

Once the detections are available in the upper viewer (the viewer with the temporary data) you can transfer them to the lower viewer (the viewer with the original image) with the following script:

import qupath.lib.gui.QuPathGUI
import qupath.lib.gui.viewer.QuPathViewer
import qupath.lib.objects.PathObject
import qupath.lib.scripting.QP

QuPathGUI qupath = QuPathGUI.getInstance()

QuPathViewer upperViewer = qupath.getViewers().get(0)
QuPathViewer lowerViewer = qupath.getViewers().get(1)

// Collect detections in upper viewer
def detections = new ArrayList<PathObject>()
upperViewer.getHierarchy().getDetectionObjects().each {detect ->
    detections.add(new qupath.lib.objects.PathDetectionObject(detect.getROI(), detect.getPathClass()))
}

// Add detections in lower viewer
detections.each{detect ->
    lowerViewer.getHierarchy().addPathObject(detect, false)
}

QP.fireHierarchyUpdate()
println('Done!')

After selecting the detections in the hierarchy of the lower viewer and using Objects>Annotations...>Insert into hierarchy,
the detections are visible and accessible in the original image.


Here is a modified version of the script that adds the temporary local threshold channel.
It relies on the new CluPath version 0.4.1.24.

Modifications:

  • Switched to the new CluPath version clupath-0.4.1.24
  • Tile extension added to avoid abrupt changes at the tile borders
  • New clijx function localThresholdPhansalkar() used (instead of stacked clijx commands)

Here is version 3 of the script file showAdditionalChannels_Absorbance_LocalThreshold_3.txt (15.4 KB)
(download file and change file extension to .groovy)

Here is the relevant part of the script:

// ******  Parameters  *******

// Gauss smoothing parameter
double gaussSigma = 1.0

// Auto Local Threshold (Phansalkar method)
double k = 0.125
double r = 0.5
double radius = 10.0


// ******  Script  *******

Project project = QP.getProject()

def entry = QP.getProjectEntry()

if (entry == null){
    print 'No image is loaded!' + '\n'
    return
}

def name = entry.getImageName()
print name + '\n'

def imageData = entry.readImageData()
def currentServer = imageData.getServer()

// Create channel list from existing channels
def channels = []
channels.addAll(updateChannelNames(name, currentServer.getMetadata().getChannels()))
// and add additional channels
channels.add(ImageChannel.getInstance(name +"_absR", ColorBrown))
channels.add(ImageChannel.getInstance(name +"_LT", ColorMagenta))
println 'n channels: ' + channels.size() + '\n'

// Create the new server, the new image & add to the project
def server8bit = new TypeConvertServer_AddChannels(currentServer, channels, gaussSigma, k, r, radius)
def imageDataCreated = new ImageData<BufferedImage>(server8bit)
imageDataCreated.setImageType(ImageData.ImageType.FLUORESCENCE)

Platform.runLater {
    // Create new project entry for the new channels ...
    // (use FLUORESCENCE image type to have access to the single channels directly)
    def newentry = ProjectCommands.addSingleImageToProject(project, currentServer, ImageData.ImageType.FLUORESCENCE)
    BufferedImage thumbnail = ProjectCommands.getThumbnailRGB(server8bit)
    newentry.setThumbnail(thumbnail)
    newentry.setImageName("My New Image")
    QPEx.getQuPath().openImageEntry(newentry)

    QPEx.getQuPath().refreshProject()

    QPEx.getCurrentViewer().setImageData(imageDataCreated)

    QPEx.getQuPath().openImageEntry(project.getEntry(imageDataCreated))

    QPEx.getQuPath().refreshProject()

    // .. or display the new channels in the current viewer of the original image instead
    //QPEx.getCurrentViewer().setImageData(imageDataCreated)
}

println 'Done!' + '\n'

// ****  end of script  ****



// Prepend a base name to channel names
List<ImageChannel> updateChannelNames(String name, Collection<ImageChannel> channels) {
    return channels
            .stream()
            .map( c -> {
                return ImageChannel.getInstance(name + '-' + c.getName(), c.getColor())
                }
            ).collect(Collectors.toList())
}


class TypeConvertServer_AddChannels extends TransformingImageServer<BufferedImage> {
    ImagePlus imp
    CLUPATH clijx
    float[] pixelsCLIJX

    double gaussSigma
    int extNP
    float k, r, radius
    float[] absorbance

    ImageServer<BufferedImage> currentserver
    private List<ImageChannel> channels
    private ImageServerMetadata originalMetadata
    def cm = ColorModelFactory.getProbabilityColorModel32Bit(channels)

    ArrayList<ColorTransformer.ColorTransformMethod> colortransformation = new ArrayList<ColorTransformer.ColorTransformMethod>()

    TypeConvertServer_AddChannels(ImageServer<BufferedImage> server, List<ImageChannel> channels,
                                       double gaussSigma, double k, double r, double radius) {
        super(server)
        currentserver = server
        this.channels = channels
        this.gaussSigma = gaussSigma
        this.k = k
        this.r = r
        this.radius = radius

        // Number of pixels of region extension
        extNP = Math.max(3, (int)(3 * gaussSigma))
        extNP = Math.max(extNP, Math.ceil(radius))

        this.originalMetadata = new ImageServerMetadata.Builder(currentserver.getMetadata())
               //.pixelType(PixelType.FLOAT32)
                .pixelType(PixelType.UINT8)
                .rgb(false)
                .channels(channels)
                .build()

        colortransformation.add(ColorTransformer.ColorTransformMethod.Red)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Green)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Blue)

        absorbance = new float[256]
        absorbance[0] = 255.0
        for (int n=1; n<256; n++) {
            absorbance[n] = -100.0*Math.log(n/255.0)   // note: a factor of -46.0 would keep all values <= 255
        }

        // CLIJX
        clijx = CLUPATH.getInstance("")
        // optional: Select a specific GPU
        // clijx = CLUPATH.getInstance("GeForce")
        print(clijx.getGPUName() + '\n')
    }

    public ImageServerMetadata getOriginalMetadata() {
        return originalMetadata
    }

    @Override
    protected ImageServerBuilder.ServerBuilder<BufferedImage> createServerBuilder() {
        return currentserver.builder()
    }

    @Override
    protected String createID() {
        return UUID.randomUUID().toString()
    }

    @Override
    public String getServerType() {
        return "My 8bit Type converting image server"
    }

    public BufferedImage readBufferedImage(RegionRequest request) throws IOException {
        int idx, idxRoi

        String path = request.getPath()
        int ds = request.getDownsample()

        int z = request.getZ()
        int t = request.getT()

        int extNPminX, extNPmaxX, extNPminY, extNPmaxY
        extNPminX = extNPmaxX = extNPminY = extNPmaxY = extNP

        int xExt = request.x - extNPminX * ds
        int yExt = request.y - extNPminY * ds

        if (xExt < 0) {
            extNPminX = (int) (1.0 * request.x / ds)
            xExt = request.x - extNPminX * ds
        }
        if (yExt < 0) {
            extNPminY = (int) (1.0 * request.y / ds)
            yExt = request.y - extNPminY * ds
        }

        int wExt = request.width + (extNPminX + extNPmaxX) * ds
        int hExt = request.height + (extNPminY + extNPmaxY) * ds

        //print 'request: ' + ds + ' ' + request.x + ' ' + request.y + ' ' + request.width + ' ' + request.height + '\n'
        //print 'requestExt: ' + ds + ' ' + xExt + ' ' + yExt + ' ' + wExt + ' ' + hExt + '\n'

        // Create extended region to avoid edge effects of gaussian smoothing
        ImageRegion region = ImageRegion.createInstance(xExt, yExt, wExt, hExt, z, t)
        RegionRequest requestExt = RegionRequest.createInstance(path, ds, region)
        def img = getWrappedServer().readBufferedImage(requestExt)

        def raster = img.getRaster()

        int nBands = raster.getNumBands()
        int w = img.getWidth()
        int h = img.getHeight()
        int wRoi = w - extNPminX - extNPmaxX
        int hRoi = h - extNPminY - extNPmaxY

        SampleModel model = new BandedSampleModel(DataBuffer.TYPE_BYTE, wRoi, hRoi, nBands + 2)
        byte[][] bytes = new byte[nBands + 2][wRoi*hRoi]
        DataBufferByte buffer = new DataBufferByte(bytes, wRoi*hRoi)
        WritableRaster raster2 = Raster.createWritableRaster(model, buffer, null)

        GaussianBlur gb = new GaussianBlur()

        float[] pixels, pixelsRoi = null
        float[] pixels2 = new float[w*h]
        FloatProcessor fp = new FloatProcessor(w, h, pixels2)
        float[] pixelsRGBRoi = new float[nBands*wRoi*hRoi]

        int[] rgb = img.getRGB(0, 0, w, h, null, 0, w)

        // Process bands in reverse order (Blue, Green, Red) so the smoothed Red channel remains in 'pixels' afterwards
        for (int b = nBands-1; b>=0; b--) {

            pixels = ColorTransformer.getSimpleTransformedPixels(rgb, colortransformation.get(b), null)

            // IJ blur
            fp.setPixels(pixels)
            gb.blurFloat(fp, gaussSigma, gaussSigma, 2.0E-4D)
            pixels = (float[]) fp.getPixels()

            if (pixelsRoi == null)
                pixelsRoi = new float[wRoi * hRoi]

            // Crop original region from extended region
            for (int y = extNPminY; y < h - extNPmaxY; y++) {
                idx = y * w + extNPminX
                idxRoi = (y - extNPminY) * wRoi
                System.arraycopy(pixels, idx, pixelsRoi, idxRoi, wRoi)
            }

            System.arraycopy(pixelsRoi, 0, pixelsRGBRoi, b*wRoi*hRoi, pixelsRoi.length)

            // Add the original RGB channels
            raster2.setSamples(0, 0, wRoi, hRoi, b, pixelsRoi)
        }

        // here: only Red channel is used to derive additional channels
        for (int b=0; b<1; b++) {
             // relevant content (smoothed Red channel) is still in fp, pixels and pixelsRoi

            // I: Create and add absorbance Red channel
            // relevant content is still in pixelsRoi
            for (int i=0; i<pixelsRoi.length; i++)
                pixelsRoi[i] = absorbance[(int)pixelsRoi[i]]

            raster2.setSamples(0, 0, wRoi, hRoi, nBands + 0, pixelsRoi)

            // relevant content (smoothed Red channel) is still in fp and pixels

            // II: Create and Add Channel: Local Threshold
            // relevant content is still in pixels
            // CLIJX: generate GPU memory buffer
            ClearCLBuffer input = clijx.pushArray(pixels, w, h, 1)
            ClearCLBuffer result = clijx.create(input.getDimensions(), clijx.Float)
            ClearCLBuffer clbtmp = clijx.create(input.getDimensions(), clijx.Float)

            clijx.localThresholdPhansalkar(input, result, radius, k, r)

            // Invert Result
            clijx.binaryNot(result, clbtmp)
            clijx.multiplyImageAndScalar(clbtmp, result, 255.0)

            // END of Calculation of the Auto Local Threshold (Phansalkar method)

            imp = clijx.pull(result)
            pixelsCLIJX = (float[]) imp.getProcessor().getPixels()

            // Crop original region from extended region
            for (int y = extNPminY; y < h - extNPmaxY; y++) {
                idx = y * w + extNPminX
                idxRoi = (y - extNPminY) * wRoi
                System.arraycopy(pixelsCLIJX, idx, pixelsRoi, idxRoi, wRoi)
            }

            raster2.setSamples(0, 0, wRoi, hRoi, nBands + 1, pixelsRoi)

            // END of Add Channel: Local Threshold

            // close GPU memory buffers
            input.close()
            result.close()
            clbtmp.close()
        }

        return new BufferedImage(cm, Raster.createWritableRaster(model, buffer, null), false, null)
    }
}
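The tile-extension logic at the top of readBufferedImage() can be isolated as plain arithmetic: the request is grown by extNP pixels (at the current downsample) on each side, clamped at the image origin, and the filtered result is later cropped back. A standalone sketch with a hypothetical class name and made-up example numbers:

```java
// Standalone sketch of the tile-extension arithmetic in readBufferedImage() (hypothetical class).
public class TileExtensionDemo {

    // returns {xExt, yExt, wExt, hExt} for a request at (x, y) with size (wReq, hReq)
    static int[] extend(int x, int y, int wReq, int hReq, int extNP, int ds) {
        int extNPminX = extNP, extNPmaxX = extNP;
        int extNPminY = extNP, extNPmaxY = extNP;

        int xExt = x - extNPminX * ds;
        if (xExt < 0) {              // clamp at the left image border
            extNPminX = x / ds;
            xExt = x - extNPminX * ds;
        }
        int yExt = y - extNPminY * ds;
        if (yExt < 0) {              // clamp at the top image border
            extNPminY = y / ds;
            yExt = y - extNPminY * ds;
        }
        int wExt = wReq + (extNPminX + extNPmaxX) * ds;
        int hExt = hReq + (extNPminY + extNPmaxY) * ds;
        return new int[]{xExt, yExt, wExt, hExt};
    }

    public static void main(String[] args) {
        // a 256x256 tile at (100, 0), downsample 4, extension of 10 pixels:
        // the top edge is clamped, so the tile only grows downwards in y
        int[] r = extend(100, 0, 256, 256, 10, 4);
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
    }
}
```

After reading the extended tile, the script copies only the inner rows and columns back into the wRoi x hRoi output raster, so the Gaussian blur and the local threshold never see a hard tile border.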


Here is another script which copies the detections and annotations from one viewer to another copyAnnotationsAndDetectionBetweenImages.txt (3.6 KB)
(download file and change file extension to .groovy)

Here is the relevant part of the script:

ROI roi

QuPathGUI qupath = QuPathGUI.getInstance()

// Two open viewers expected (with original image and temporary image loaded)
// 0 - upper viewer
// 1 - lower viewer
// Change viewer if necessary
QuPathViewer sourceViewer = qupath.getViewers().get(0)
QuPathViewer targetViewer = qupath.getViewers().get(1)


sourceViewer.getHierarchy().getAnnotationObjects().each { upAnno ->
    roi = upAnno.getROI()

    def annoName = upAnno.getName()
    print 'annoName: ' + annoName + '\n'

    PathAnnotationObject annotation = new qupath.lib.objects.PathAnnotationObject(roi, upAnno.getPathClass())
    annotation.setName(annoName)

    // check if annotation exists in target viewer
    boolean annotationExist = false
    targetViewer.getHierarchy().getAnnotationObjects().each { targetAnno ->
        if (targetAnno.getName() == annoName) {
            print 'Cannot insert detections into already existing annotation ' + annoName + '\n'
            annotationExist = true
            // a Groovy .each closure cannot be broken out of; the flag is checked below instead
        }
    }

    if (!annotationExist) {
        targetViewer.getHierarchy().addPathObject(annotation)

        //qupath.setupViewer(targetViewer)
        QP.fireHierarchyUpdate()

        // Collect detections in source viewer (for the specific annotation)
        def detections = new ArrayList<PathObject>()
        sourceViewer.getHierarchy().getDetectionObjects().each { detect ->
            if (detect.getParent().getName() == annoName)
                detections.add(new qupath.lib.objects.PathDetectionObject(detect.getROI(), detect.getPathClass()))
        }

        // Add detections in target viewer (into specific annotation)
        detections.each { detect ->
            targetViewer.getHierarchy().addPathObjectBelowParent(annotation, detect, false)
        }

        //qupath.setupViewer(targetViewer)
        QP.fireHierarchyUpdate()
    }
}

qupath.setupViewer(targetViewer)
QP.fireHierarchyUpdate()

println('Done!')

Hey @phaub,

When I run the new script, I get this pop-up message:

INFO: Refreshing extensions in C:\Users\USER\QuPath\extensions
INFO: Added extension: C:\Users\USER\QuPath\extensions\clupath-0.4.1.19-jar-with-dependencies.jar
INFO: Added extension: C:\Users\USER\QuPath\extensions\clupath-0.4.1.24-jar-with-dependencies.jar
INFO: Initializing type adapters
INFO: Bio-Formats version 6.5.1
INFO: Loaded extension Bio-Formats options (Bio-Formats 6.5.1) (17 ms)
INFO: Loaded extension Experimental extension (2 ms)
INFO: Loaded extension ImageJ extension (45 ms)
INFO: Loaded extension JPen extension (17 ms)
INFO: Loaded extension Processing extension (32 ms)
INFO: Loaded extension Rich script editor extension (301 ms)
INFO: Loaded extension SVG export extension (2 ms)
INFO: OpenSlide version 3.4.1
INFO: Starting QuPath with parameters:
INFO: Project set to Project: PH test-project
INFO: Saving project Project: PH test-project…
INFO: Image data set to ImageData: Brightfield (other), GFAP Ctrl 1.svs
INFO: Loading script file C:\Users\USER\IHC_ QuPath\1-showAdditionalChannels_Absorbance_LocalThreshold_3.groovy
INFO: GFAP Ctrl 1.svs

INFO: n channels: 5

INFO: Done!

WARN: Updating ID property to C:\Users\USER\Desktop\PH test\project.qpproj::8
INFO: Writing object hierarchy with 0 object(s)…
INFO: Image data written in 0.04 seconds
ERROR: QuPath exception: No signature of method: net.haesleinhuepf.clupath.CLUPATH.localThresholdPhansalkar() is applicable for argument types: (net.haesleinhuepf.clij.clearcl.ClearCLBuffer, net.haesleinhuepf.clij.clearcl.ClearCLBuffer…) values: [ ClearCLBuffer [mClearCLContext=ClearCLContext [device=ClearCLDevice [mClearCLPlatform=ClearCLPlatform [name=AMD Accelerated Parallel Processing], name=Juniper]], mNativeType=Float, mNumberOfChannels=1, mDimensions=[2301, 816, 1], getMemAllocMode()=Best, getHostAccessType()=ReadWrite, getKernelAccessType()=ReadWrite, getBackend()=net.haesleinhuepf.clij.clearcl.backend.jocl.ClearCLBackendJOCL@1697087d, getPeerPointer()=net.haesleinhuepf.clij.clearcl.ClearCLPeerPointer@275d3fbd], …]
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at TypeConvertServer_AddChannels.readBufferedImage(Script1.groovy:373)
at TypeConvertServer_AddChannels.readBufferedImage(Script1.groovy)
at qupath.lib.images.servers.AbstractImageServer.getDefaultThumbnail(AbstractImageServer.java:317)
at qupath.lib.gui.commands.ProjectImportImagesCommand.getThumbnailRGB(ProjectImportImagesCommand.java:692)
at qupath.lib.gui.commands.ProjectCommands.getThumbnailRGB(ProjectCommands.java:110)
at qupath.lib.gui.commands.ProjectCommands$getThumbnailRGB$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at Script1$_run_closure1.doCall(Script1.groovy:159)
at Script1$_run_closure1.doCall(Script1.groovy)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1029)
at groovy.lang.Closure.call(Closure.java:412)
at groovy.lang.Closure.call(Closure.java:406)
at groovy.lang.Closure.run(Closure.java:493)
at com.sun.javafx.application.PlatformImpl.lambda$runLater$10(PlatformImpl.java:428)
at java.base/java.security.AccessController.doPrivileged(Unknown Source)
at com.sun.javafx.application.PlatformImpl.lambda$runLater$11(PlatformImpl.java:427)
at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:96)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.lambda$runLoop$3(WinApplication.java:174)
at java.base/java.lang.Thread.run(Unknown Source)

and no new image comes up. I discovered that if I close the project and re-open it, then a new image pops up; however, it looks the same as the original image. Is that normal?

I tried opening a single image in QuPath but when I ran the script it said “INFO: No image is loaded!”, that’s why I decided to create a project.


Hi @kitcat

Please check the following hints, one by one:

  • I

Please make sure that in your extension directory (C:\Users\USER\QuPath\extensions) only the clupath-0.4.1.24-jar-with-dependencies.jar exists. Delete the file clupath-0.4.1.19-jar-with-dependencies.jar if this is still in the folder.

  • II

In the QuPath project double click your image to display it.
The message No image is loaded! means that no image is displayed.
In the project you can have a single image or a collection of images.

  • III

Is this image of the same type as your test images (RGB, DAB only)?
If not, you can maybe make a test with one of the images you have uploaded.

  • IV

If steps I to III have not solved the issue, then test the first script from this post.

Note:

Yes, this is normal. Even if the execution of the script stops (as in your case), a new entry is added to the project. But the temporary data no longer exists, so this additional image entry looks like the original (in fact, it is the original).
Don’t worry about this behaviour. The new image entries always have to be removed before closing the project or QuPath. You will get used to it.

2 Likes

Hey @phaub, yes it worked, thanks a lot for that!

Both scripts worked wonderfully! :partying_face:

Is there a way to get area % measurements rather than the number of detections as a readout?

1 Like

Good to hear :slight_smile:

Not sure if there is an easier way … but here is a script which adds an ‘Area’ measurement to annotations and detections and prints the total area of all detections per annotation:

import qupath.lib.gui.QuPathGUI
import qupath.lib.gui.viewer.QuPathViewer
import qupath.lib.scripting.QP

QuPathGUI qupath = QuPathGUI.getInstance()
QuPathViewer viewer = qupath.getViewer()

// Add 'Area' measurement to annotations and detections
QP.selectAnnotations()
QP.addShapeMeasurements("AREA")

QP.selectDetections()
QP.addShapeMeasurements("AREA")

// Calculate and display the sum of detection areas per annotation
viewer.getHierarchy().getAnnotationObjects().each { anno ->
    def annoName = anno.getName()
    def annoArea = anno.getMeasurementList().getMeasurementValue("Area µm^2")

    double totalArea = 0.0
    viewer.getHierarchy().getDetectionObjects().each { detect ->
        if (detect.getParent().getName() == annoName)
            totalArea += detect.getMeasurementList().getMeasurementValue("Area µm^2")
    }

    print annoName + ' area : ' + annoArea + ' µm² , Area of detections : ' + totalArea + ' µm²' + '\n'
}

println('Done!')
1 Like

Hi @phaub, thanks for the script!!

However I don’t know what the measurements are, it gives me a null area and area of detection.

I first thought the null area was the area of the ROI, but the value is smaller than the area of detection, so I’m not sure what it’s measuring.

INFO: null area : 6511636.2307549715 µm² , Area of detections : 6717655.411477566 µm²

INFO: Done!

But yeah I think you are right, there is no easy way, because I am not sure how I would be able to batch export all these measurements since the values are only displayed in the notes part of the script box.

1 Like

Hi @kitcat
assign a unique name to each of your annotations!!!

The script to transfer the annotations and detections
and
the script to sum the detection area
rely on unique names.

The format of the output can be changed so that you can simply copy&paste the data to a spreadsheet application.
Or the output can be written directly into a file.
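As a sketch of that second option (the file path and column layout here are only illustrative), the per-annotation sums from the script above could be written to a tab-separated file that opens directly in a spreadsheet application. This assumes the ‘Area’ measurements have already been added, as in the earlier script:

```groovy
import qupath.lib.scripting.QP

// Hypothetical output path - adjust to your system
def outFile = new File('C:/temp/detection_areas.tsv')

outFile.withPrintWriter { pw ->
    // Header row, tab-separated for easy spreadsheet import
    pw.println(['Annotation', 'Annotation area (µm²)', 'Detection area (µm²)'].join('\t'))

    QP.getAnnotationObjects().each { anno ->
        def annoArea = anno.getMeasurementList().getMeasurementValue('Area µm^2')

        double totalArea = 0.0
        QP.getDetectionObjects().each { detect ->
            // Compare parent objects directly instead of names
            if (detect.getParent() == anno)
                totalArea += detect.getMeasurementList().getMeasurementValue('Area µm^2')
        }

        pw.println([anno.getName(), annoArea, totalArea].join('\t'))
    }
}
println('Written to ' + outFile)
```

Comparing `detect.getParent() == anno` instead of comparing names also sidesteps the unique-name requirement for this particular summary.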

But first, the result has to be correct.

2 Likes

For most measurement summaries, I tend to create them per annotation to avoid the need to know whether the user is naming, classifying, or using no method of distinguishing annotations.
Referencing one of my lists since I never remember…
detectionsInThisAnnotation = viewer.getHierarchy().getObjectsForROI(qupath.lib.objects.PathDetectionObject, anno.getROI())
Now you have a list of detections just from the current annotation rather than checking names.

The total detection area per annotation can then be added directly to the annotation measurement list as:
anno.getMeasurementList().putMeasurement("Total Detection Area - Square Microns", totalArea)
Similarly:
anno.getMeasurementList().putMeasurement("Percentage area containing detections", totalArea/annoArea*100)

Now the measurements can be obtained through Measure->Export Measurements or scripting options.
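Putting those pieces together, a per-annotation loop might look like the following sketch. It assumes the `Area µm^2` measurements already exist (as added by the earlier script) and uses the ROI-based lookup instead of names:

```groovy
import qupath.lib.objects.PathDetectionObject
import qupath.lib.scripting.QP

def hierarchy = QP.getCurrentHierarchy()

hierarchy.getAnnotationObjects().each { anno ->
    // Detections whose ROI falls within this annotation - no names needed
    def detections = hierarchy.getObjectsForROI(PathDetectionObject, anno.getROI())

    double totalArea = detections.sum {
        it.getMeasurementList().getMeasurementValue('Area µm^2')
    } ?: 0.0
    double annoArea = anno.getMeasurementList().getMeasurementValue('Area µm^2')

    // Store the summaries on the annotation itself
    anno.getMeasurementList().putMeasurement('Total Detection Area - Square Microns', totalArea)
    anno.getMeasurementList().putMeasurement('Percentage area containing detections', totalArea / annoArea * 100)
    anno.getMeasurementList().close()
}

QP.fireHierarchyUpdate()
```

After running, the two new columns should appear per annotation under Measure>Export Measurements.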

One last thing - if detections are not cleared before/during running a script, and the script is run twice, there is a good chance all of the detections could be duplicated - resulting in larger than expected detection area sums.

3 Likes

Important and helpful hints. Thanks @Research_Associate

3 Likes

Great thanks a lot @phaub and @Research_Associate!!

3 Likes