\(•ᴗ•)/ QuPath scripting (#1): Using CluPath to save smoothed image regions

In this post, @kitcat published virtual slide images that contain a checkerboard pattern structure in the high-resolution planes.

Instead of speculating about the meaning of these patterns, I want to show here that they can be reduced or even eliminated by Gaussian smoothing, while the image content and the de facto resolution are not reduced significantly.

I have used @petebankhead 's great QuPath to save smoothed regions of the images in full resolution.

This approach seems appropriate because the image pixel size (0.25 µm, highest resolution) is less than the Abbe limit (~0.35 µm for 40x), and therefore the sampling is close to the Nyquist condition. The approach described here can be interpreted as a kind of ‘reconstruction by smoothing’.
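(For reference, assuming typical values for a 40x objective, e.g. λ ≈ 0.55 µm and NA ≈ 0.8: the Abbe limit is d = λ / (2·NA) ≈ 0.55 / 1.6 ≈ 0.34 µm, consistent with the ~0.35 µm quoted above.)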

The resulting images show that the patterns do not really interfere with the visual use of the images, as they can easily be removed, without essential loss of information, by smoothing with clij smoothing parameter = 0.7.


RGB color display

And the resulting images show that removing these patterns by smoothing with clij smoothing parameter = 1.0 dramatically improves the quality of the images for digital processing.


Red color display


DAB absorbance display

The following QuPath script
(saveSmoothedImageRegions.txt (11.6 KB) – download the file and change its extension to .groovy)
utilizes @haesleinhuepf 's clij/clupath GPU support
to save smoothed versions of image regions defined by annotations.
The script is designed for processing transmitted-light color RGB images, but it can be modified to work with single-channel fluorescence images.
More information can be found in the script file.

For all RGB images loaded in the project, the script saves a pyramidal OME-TIFF image for each region defined in the image.
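For example, with gaussSigma = 1.0, outputDownsample = 1 and tilesize = 512, an annotation named ‘sectionA1’ on an image ‘slideA’ is written as slideA_sectionA1_1.0_1_512.ome.tif (see the filename construction in the script below).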

USAGE:

  • Create an empty QuPath project
  • Add relevant images to the project
  • Select (multiple) annotations in each of the images
  • Change the names of the annotations according to your schema (e.g. ‘sectionA1’, ‘sectionA2’, ‘sectionB1’, ‘sectionB2’ …)

  • Load the script in the QuPath script editor
  • Configure the parameters ‘outpath’, ‘outputDownsample’, ‘gaussSigma’ in the section PARAMETERS in the script
  • Run the script in the QuPath script editor
  • Wait until finished

:slightly_smiling_face:

Question for @haesleinhuepf :
Currently the data is first pulled into an ImagePlus and from there into the pixels array.

...
imp = clijx.pull(result)
pixels = (float[]) imp.getProcessor().getPixels()
...

Is there a more direct and faster way to pull the result into the pixels array?

Here is the relevant part of the script:

// ******  PARAMETERS  *******

def outpath = "C:\\tmp\\QP_smoothed_out\\"

// OME.Tiff parameters
def tilesize = 512
def outputDownsample = 1
def pyramidscaling = 2
def compression = OMEPyramidWriter.CompressionType.J2K_LOSSY     //J2K //UNCOMPRESSED //LZW

// CLIJ gauss smoothing parameter
double gaussSigma = 1.0

// ******  SCRIPT  *******

Project project = QP.getProject()

for (entry in project.getImageList()) {
    def name = entry.getImageName()

    // Strip off the file extension (if present)
    int dot = name.lastIndexOf('.')
    if (dot > 0)
        name = name.substring(0, dot)
    print name

    ImageData currentImage = entry.readImageData()
    ImageServer currentServer = currentImage.getServer()

    Collection<PathObject> annos = currentImage.getHierarchy().getAnnotationObjects()

    for (anno in annos) {
        def annoname = anno.getName()

		annoRoi = anno.getROI()

		int x = annoRoi.getBoundsX()
		int y = annoRoi.getBoundsY()
		int w = annoRoi.getBoundsWidth()
		int h = annoRoi.getBoundsHeight()

		ImageRegion selection = ImageRegion.createInstance(x, y, w, h, 1, 1)

		def serverRGBsmooth = new TypeConvertServerRGBsmooth(currentServer, gaussSigma)

		def transformedServer = new TransformedServerBuilder(serverRGBsmooth)
		def myServer = transformedServer
				.crop(selection)
				//.rotate(RotatedImageServer.Rotation.ROTATE_90)
				.build()

		def filename = name + "_" + annoname
		filename = filename + "_" + gaussSigma + "_" + outputDownsample + "_" + tilesize + ".ome.tif"
		//filename = filename + ".ome.tif"
		println 'Writing OME-TIFF ' + filename
		def pathOutput = outpath + filename

		new OMEPyramidWriter.Builder(myServer)
				.compression(compression)
				.parallelize()
				.tileSize(tilesize)
				.scaledDownsampling(outputDownsample, pyramidscaling)
				.build()
				.writePyramid(pathOutput)

		println('Done:' + filename)
    }
}

println('Finished!')


// ******  CLASS DEFINITION : Smoothing RGB Server  *******

class TypeConvertServerRGBsmooth extends TransformingImageServer<BufferedImage> {
    ImagePlus imp
    CLUPATH clijx

    double gaussSigma

    ColorModel cm
    ImageServer<BufferedImage> currentserver
    ArrayList<ColorTransformer.ColorTransformMethod> colortransformation = new ArrayList<ColorTransformer.ColorTransformMethod>()

    TypeConvertServerRGBsmooth(ImageServer<BufferedImage> server, double gaussSigma) {
        super(server)
        currentserver = server
        this.gaussSigma = gaussSigma

        colortransformation.add(ColorTransformer.ColorTransformMethod.Red)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Green)
        colortransformation.add(ColorTransformer.ColorTransformMethod.Blue)

        cm = ColorModelFactory.getProbabilityColorModel32Bit(currentserver.getMetadata().getChannels())

        // CLIJX
        // Request 1st GPU
        clijx = CLUPATH.getInstance()
        // or request a specific GPU
        // clijx = CLUPATH.getInstance("GeForce")
        print(clijx.getGPUName() + '\n')
    }

    public ImageServerMetadata getOriginalMetadata() {
        return currentserver.getMetadata()
    }

    @Override
    protected ImageServerBuilder.ServerBuilder<BufferedImage> createServerBuilder() {
        return currentserver.builder()
    }

    @Override
    protected String createID() {
        return UUID.randomUUID().toString();
    }

    @Override
    public String getServerType() {
        return "My RGB Type smoothing image server"
    }

    public BufferedImage readBufferedImage(RegionRequest request) throws IOException {
        int idx, idxRoi

        String path = request.getPath()
        int ds = request.getDownsample()

        int z = request.getZ()
        int t = request.getT()

        // Extend the requested region by ~3*sigma pixels on each side,
        // so the Gaussian kernel has full support at the region borders
        int extNP = Math.max(3, (int)(3 * gaussSigma))

        int extNPminX, extNPmaxX, extNPminY, extNPmaxY
        extNPminX = extNPmaxX = extNPminY = extNPmaxY = extNP

        int xExt = request.x - extNPminX * ds
        int yExt = request.y - extNPminY * ds

        if (xExt < 0) {
            extNPminX = (int) (1.0 * request.x / ds)
            xExt = request.x - extNPminX * ds
        }
        if (yExt < 0) {
            extNPminY = (int) (1.0 * request.y / ds)
            yExt = request.y - extNPminY * ds
        }

        int wExt = request.width + (extNPminX + extNPmaxX) * ds
        int hExt = request.height + (extNPminY + extNPmaxY) * ds

        //print 'request: ' + ds + ' ' + request.x + ' ' + request.y + ' ' + request.width + ' ' + request.height + '\n'
        //print 'requestExt: ' + ds + ' ' + xExt + ' ' + yExt + ' ' + wExt + ' ' + hExt + '\n'

        // Create extended region to avoid edge effects of Gaussian smoothing
        ImageRegion region = ImageRegion.createInstance(xExt, yExt, wExt, hExt, z, t);
        RegionRequest requestExt = RegionRequest.createInstance(path, ds, region);
        def img = getWrappedServer().readBufferedImage(requestExt)

        def raster = img.getRaster()

        int nBands = raster.getNumBands()
        int w = img.getWidth()
        int h = img.getHeight()
        int wRoi = w - extNPminX - extNPmaxX
        int hRoi = h - extNPminY - extNPmaxY

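        // Allocate an 8-bit banded raster sized to the original (non-extended) region;
        // the smoothed bands are copied into it below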
        SampleModel model = new BandedSampleModel(DataBuffer.TYPE_BYTE, wRoi, hRoi, nBands)
        byte[][] bytes = new byte[nBands][wRoi*hRoi]
        DataBufferByte buffer = new DataBufferByte(bytes, wRoi*hRoi)
        WritableRaster raster2 = Raster.createWritableRaster(model, buffer, null)

        float[] pixels, pixelsRoi

        int[] rgb = img.getRGB(0, 0, w, h, null, 0, w)

        for (int b = 0; b < nBands; b++) {
            pixels = ColorTransformer.getSimpleTransformedPixels(rgb, colortransformation.get(b), null);

            // CLIJX
            ClearCLBuffer input = clijx.pushArray(pixels, w, h, 1)
            ClearCLBuffer result = clijx.create(input)

            clijx.gaussianBlur(input, result, gaussSigma, gaussSigma)

            imp = clijx.pull(result)
            pixels = (float[]) imp.getProcessor().getPixels()

            // Crop original region from extended region
            pixelsRoi = new float[wRoi * hRoi]
            for (int y = extNPminY; y < h - extNPmaxY; y++) {
                idx = y * w + extNPminX
                idxRoi = (y - extNPminY) * wRoi
                System.arraycopy(pixels, idx, pixelsRoi, idxRoi, wRoi)
            }

            raster2.setSamples(0, 0, wRoi, hRoi, b, pixelsRoi)

            input.close()
            result.close()
        }

        return new BufferedImage(cm, Raster.createWritableRaster(model, buffer, null), false, null)
    }
}


Hey Peter @phaub ,

A minimal speedup can be achieved by not creating any ImageProcessor or ImagePlus: you create a float array, wrap a JNI FloatBuffer around it, and then copy the bytes from the CL buffer source directly. CLIJ does this internally.
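For illustration, a minimal sketch of that idea (hedged: it assumes clij’s ClearCLBuffer exposes writeTo(java.nio.Buffer, boolean), and reuses w, h and result from the script above):

import java.nio.FloatBuffer

// Pull the GPU result straight into a float array, skipping ImagePlus/ImageProcessor
float[] pixels = new float[w * h]
FloatBuffer fb = FloatBuffer.wrap(pixels)
result.writeTo(fb, true)   // blocking copy from GPU memory into the wrapped array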

Furthermore – I’m sure you’re aware of this, but just in case others read our discussion here: a single operation executed on the GPU may not bring any speedup, as the transfer between CPU and GPU memory takes time. I would suggest implementing longer workflows on the GPU. What would be your next processing step?
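For example, a longer workflow might keep all intermediates in GPU memory and pull back only the final result (a hedged sketch – topHatBox just stands in for whatever the next step might be):

def input   = clijx.pushArray(pixels, w, h, 1)
def blurred = clijx.create(input)
def result  = clijx.create(input)
clijx.gaussianBlur(input, blurred, gaussSigma, gaussSigma)
clijx.topHatBox(blurred, result, 10, 10, 0)   // hypothetical second step, stays on the GPU
def imp = clijx.pull(result)                  // single transfer back to the CPU
input.close(); blurred.close(); result.close()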

Let me know if this helps!

Cheers,
Robert


Thanks @haesleinhuepf for your hint.
I will compare the clij script version with a version that uses ImageJ routines to smooth the image.

Here smoothing is the only processing.
One of my goals with this script was to figure out, and to show, how easy it is to use clij in QuPath and how stably it works.
I am working on another script with stacked clij functions and am curious about your comment.
Maybe I can combine both scripts, but at the moment I’m struggling with output format issues.


As another comparison, here’s a version that just uses QuPath (+ OpenCV + Bio-Formats as built-in dependencies):

import qupath.lib.images.writers.ome.OMEPyramidWriter

def tilesize = 512
def outputDownsample = 1
def pyramidscaling = 2
def compression = OMEPyramidWriter.CompressionType.J2K_LOSSY     //J2K //UNCOMPRESSED //LZW

def imageData = getCurrentImageData()

def op = ImageOps.buildImageDataOp()
    .appendOps(ImageOps.Filters.gaussianBlur(10.0),
               ImageOps.Core.ensureType(qupath.lib.images.servers.PixelType.UINT8))
    
def serverSmooth = ImageOps.buildServer(imageData, op, imageData.getServer().getPixelCalibration())
print serverSmooth.getPreferredDownsamples()

def pathOutput = buildFilePath(PROJECT_BASE_DIR, "smoothed-32.ome.tif")

new OMEPyramidWriter.Builder(serverSmooth)
				.compression(compression)
				.parallelize()
				.tileSize(tilesize)
				.channelsInterleaved() // Usually faster
				.scaledDownsampling(outputDownsample, pyramidscaling)
				.build()
				.writePyramid(pathOutput)

I set the Gaussian blur to be much larger just to make it clear that it was really applied.

Note that the bottleneck remains the OME-TIFF export, so I’d expect the time taken for the smoothing itself to be negligible, regardless of how it is applied. The main advantage of the script I have posted is that it doesn’t require pulling all the pixels in one go (which can exceed the capacity of a Java array).

Note also that if you apply a threshold in QuPath, there is a prefilter operation that can effectively apply a Gaussian (or other kind of) filter immediately before the threshold is applied, thereby reducing artifacts dynamically; most sophisticated built-in QuPath commands (e.g. cell detection) also involve filtering steps internally. Therefore it usually shouldn’t be necessary to export another image.

(PS. It looks to me that the original patterns are in 8x8 pixel blocks, so I presume JPEG compression was used.)

EDIT: I didn’t add support for exporting annotations, but serverSmooth is just a regular QuPath ImageServer. So you can merge this with the original script from @phaub, or use the exporting images docs and writeImageRegion; a sketch follows below.
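For instance, a minimal sketch of that route (hedged – it assumes the QuPath 0.2 scripting API, i.e. RegionRequest.createInstance and writeImageRegion as documented):

import qupath.lib.regions.RegionRequest

getAnnotationObjects().each { anno ->
    // Full-resolution request covering the annotation bounds
    def request = RegionRequest.createInstance(serverSmooth.getPath(), 1, anno.getROI())
    def path = buildFilePath(PROJECT_BASE_DIR, 'smoothed-' + anno.getName() + '.tif')
    writeImageRegion(serverSmooth, request, path)
}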


One thing I was thinking about in this case: when using the pixel classifier, only the first feature, which is Gaussian, would really apply this, correct? All other features, like the Laplacian, Weighted deviation, etc., would be based on those raw checkerboard patterns – at least at the highest resolutions, where the edges are necessary to detect the very fine filaments.

Agreed - it is a bit weirder than normal though. I often see 8x8 blocks, but in this case some sub-blocks have the checkerboard pattern, not the full 8x8 block.


@Research_Associate No, the pixel classifier’s features all involve Gaussian smoothing to some extent, based upon a ‘scale’ that must be >= 0.5 (the feature name is Laplacian of Gaussian, rather than Laplacian).

There are some differences, however, in that some features require derivatives – implemented by applying 1D Gaussian filters separably. It wouldn’t be possible to achieve exactly the same result if you only have a 2D prefiltered image to work with.


Ah, good to know, I had thought the “Scale” was a straight mean of the pixels - downsampling. But maybe that is only the case for the Resolution setting (in the pixel classifier still)?

Yes, the resolution determines downsampling and the scale refers to the Gaussian sigma.


I agree it looks odd, although tiffinfo suggests that it is indeed JPEG compression:

TIFF Directory at offset 0x8bd12ca6 (2345741478)
  Subfile Type: (0 = 0x0)
  Image Width: 145687 Image Length: 51317 Image Depth: 1
  Tile Width: 256 Tile Length: 256
  Bits/Sample: 8
  Compression Scheme: JPEG
  Photometric Interpretation: RGB color
  YCbCr Subsampling: 2, 2
  Samples/Pixel: 3
  Planar Configuration: single image plane

You can also get some extra info via QuPath with getCurrentServer().dumpMetadata() (doesn’t always work, but for most simple images should return whatever QuPath can get via OpenSlide/Bio-Formats/whatever image reader was used).


Thanks Pete @petebankhead for this short and clear script.
There is one drawback. It creates images with data type float32 instead of uint8.
The file size is 30 times larger, because of the data type and because the J2K_LOSSY compression cannot be applied. And the colors are different from the original.

Yes, that is correct. The processing time is almost identical if the clij smoothing is replaced by ImageJ smoothing with GaussianBlur().

The main question in the referenced post was how to deal with

A global threshold did not seem to be applicable. The idea was to solve this with a local threshold method. But then the ‘checkerboard patterns issue’ showed up and had to be handled first. So, the situation is a little unusual.
My plan is to include the local threshold in the script and to save it as an additional channel. Then object detection can be done on this local threshold channel. Since this local threshold operation is much more expensive, I have used clij as a basis for fast GPU processing.
Next version to come …

Agreed - the checkerboard patterns are totally unusual and not typical JPEG artifacts. I have tested JPEG quality 0 to 100 in steps of 5 in ImageJ. There is no such checkerboard pattern.

I’m not sure if I understand the message.
My view is that measurements are questionable because of the artifact pattern - especially when derivatives are used (see above: DAB absorbance display).
Without artifacts the image would look much more like the smoothed version. Segmentation, detection and measurement are more reliable in the smoothed image.
I agree that the results achieved from the image with artifacts and the image without artifacts are most probably not exactly the same. But I’m not sure which ones are more correct.


Good point, I didn’t pay attention to that. Updated the script now to give an 8-bit output. It simply involved adding an ensureType operation.

def op = ImageOps.buildImageDataOp()
    .appendOps(ImageOps.Filters.gaussianBlur(10.0),
               ImageOps.Core.ensureType(qupath.lib.images.servers.PixelType.UINT8))

The color difference is likely due to the brightness/contrast settings not defaulting to the 0–255 range for a 32-bit image.

The pixel classifier in QuPath can be more sophisticated than a global threshold, but I realise it won’t necessarily outperform the local threshold method.

However, I think any method that involves writing a new image will be quite inefficient. Note that you can set the ImageJ plugins directory – and that includes setting it to use an existing Fiji installation. Not all Fiji plugins will work, but rather only a subset that are really ImageJ1-based – this seems to include local thresholding. Therefore you would be able to call this directly from within QuPath by sending a region to ImageJ, if you’ve set your ImageJ plugins directory to point to Fiji.
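To illustrate the idea, a rough sketch (hedged – it assumes Fiji’s ‘Auto Local Threshold’ plugin is found via the configured plugins directory, and the QuPath 0.2 IJTools API):

import qupath.imagej.tools.IJTools
import qupath.lib.regions.RegionRequest
import ij.IJ

def server = getCurrentServer()
// Downsampled request over the whole image, converted to an ImagePlus
def request = RegionRequest.createInstance(server.getPath(), 4, 0, 0, server.getWidth(), server.getHeight())
def imp = IJTools.convertToImagePlus(server, request).getImage()
IJ.run(imp, '8-bit', '')   // local threshold plugins expect 8-bit grayscale
IJ.run(imp, 'Auto Local Threshold', 'method=Phansalkar radius=15 parameter_1=0 parameter_2=0 white')
imp.show()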

Nor am I; my point was only in reply to @Research_Associate, to clarify that the pixel classification features aren’t implemented by first applying a 2D smoothing, so separating out the smoothing step won’t give identical results.

It’s largely a technical detail that doesn’t have a huge impact: the main message still holds that QuPath is applying smoothing (of some kind) before detection. If any actual measurements are made over more than a single pixel (e.g. the mean value within a ROI), then this is itself a kind of smoothing at the measurement stage as well.

Basically, a goal of QuPath is to try to provide pragmatic built-in commands for detection and measurement that are able to cope with much/most real-world data, so that things like compression artifacts don’t have to be handled separately… usually.


I love these simple solutions in QuPath :slightly_smiling_face:

I see this scripting here mainly as an experiment: experimenting with QuPath, Groovy, clij etc. with this specific use case in mind. If it turns out that local thresholding is a good option – fine.

Absolutely!
If this use case is not routine work, then this approach might be acceptable.

The problem is that this plugin is rather slow. My hope was that I could build a faster version with clij.

I really like this concept and your implementation in QuPath.
Every day I’m more enthusiastic about it. :ok_hand:
And I also love the possibility to do strange stuff like what I’m trying here. :upside_down_face:


Image data type and color OK. :white_check_mark:
Output file still a little bit larger (~30%). I will keep an eye on this.

A small issue: there is a comma missing after appendOps(ImageOps.Filters.gaussianBlur(10.0).

Added the comma now :slight_smile: