Registration running time between QuPath and openslide-python using findTransformECC

Hello everyone!

I am currently investigating different packages and methods to perform automatic registration of histopathology images (typically registering the HE and IHC images of the same tissue sample). I am limiting my investigation to rigid (Euclidean) transforms for now.

I found QuPath’s “Image alignment” tool very convenient and quick for such automatic registration. The QuPath autoAlign implementation seems to rely mainly on OpenCV’s findTransformECC (see link). QuPath takes less than a second to successfully register my test images.

Nonetheless, when I try to implement a minimal version using opencv-python, the registration takes more than 10 seconds and the result is not satisfying.

Here is my code:

import cv2
from openslide import OpenSlide
import timeit
import numpy as np

def ecc_tr(fixed_im, moving_im, nb_iterations):
    start = timeit.default_timer()
    # Read the images to be aligned: only read the smallest pyramid level (low resolution)
    fixed_thumbnail = np.array(fixed_im.get_thumbnail(fixed_im.level_dimensions[-1]))
    moving_thumbnail = np.array(moving_im.get_thumbnail(moving_im.level_dimensions[-1]))
    # Convert images to grayscale
    im1_gray = cv2.cvtColor(fixed_thumbnail,cv2.COLOR_BGR2GRAY)
    im2_gray = cv2.cvtColor(moving_thumbnail,cv2.COLOR_BGR2GRAY)
    # Size of the moving image (used as the output size for warpAffine)
    sz = im2_gray.shape
    # Define the motion model
    warp_mode = cv2.MOTION_EUCLIDEAN
    # Define 2x3 (affine) matrix and initialize the matrix to identity
    warp_matrix = np.eye(2, 3, dtype=np.float32)
    # Specify the number of iterations.
    number_of_iterations = nb_iterations
    # Threshold of the increment in the correlation coefficient between two
    # iterations (if negative, the algorithm always runs the maximal number of iterations)
    termination_eps = -1
    # Define termination criteria: 3 == cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT
    criteria = (3, number_of_iterations, termination_eps)
    # Run the ECC algorithm. The results are stored in warp_matrix.
    (cc, warp_matrix) = cv2.findTransformECC(im1_gray,im2_gray,warp_matrix, warp_mode, criteria, None, 5)
    # Use warpAffine for Translation, Euclidean and Affine
    im2_aligned = cv2.warpAffine(moving_thumbnail, warp_matrix, (sz[1],sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP, borderValue=(255, 255, 255))

    stop = timeit.default_timer()
    print(f"Elapsed time: {round(stop - start, 3)} seconds")
    return fixed_thumbnail, moving_thumbnail, im2_aligned, warp_matrix, round(stop - start, 3)

fixed_im = OpenSlide("reference.svs")
moving_im = OpenSlide("moving.svs")
fixed_thumbnail, moving_thumbnail, im2_aligned, warp_matrix, timing = ecc_tr(fixed_im, moving_im, 100)

I have 2 issues compared to QuPath, in terms of both transformation quality and processing time:

  • first, the transformation is really close to the identity after 100 iterations: the tissue is barely moved at all
  • second, it takes more than 10 seconds to run this piece of code, whereas QuPath takes less than a second.

Details of the OpenCV documentation are here: findTransformECC. I deliberately set a negative epsilon to force the algorithm to run the full 100 iterations, like QuPath.

Do you have any idea of what I am doing wrong here?

PS: I can definitely increase the number of iterations to have better results. If I set 1000 iterations, the result is great… but it takes 94 seconds :no_mouth:

Hi @polklin, I wrote the QuPath code – I don’t see anything obvious that would explain the difference. Just a few quick thoughts:

  • QuPath has weird logic for determining the image sizes (which probably needs to be updated), and it won’t necessarily match the thumbnail size obtained from OpenSlide. Therefore perhaps it’s simply operating on a lower-resolution image, and this makes it faster.
  • findTransformECC is sensitive to a good initialization – so possibly initializing it from a smaller image then refining the transform at a higher resolution would help.
  • I think OpenSlide will be returning RGB rather than BGR (not 100% sure here)
  • QuPath won’t be including warpAffine in the time (but I expect it is very fast, so not influencing things here)

Thank you very much for your response @petebankhead !

Indeed, the image-size logic in the QuPath code is a bit cryptic. That is why I started with the last pyramid level, but your suggestion was spot on: by working with downscaled images (I applied a scale factor of 4 to the thumbnails), the algorithm now runs in about a second.

I even obtain the best performance on my validation set, in terms of IoU between HE and IHC tissue annotations after registration: I guess the algorithm was paying too much attention to fine details when working on bigger images.

Trying to refine the transformation by initializing findTransformECC with the matrix computed on the low-resolution images did not improve my results; it just took longer. So I will now compute the registration on the downscaled images only.

Thanks !

PS: you are right, OpenSlide returns RGB rather than BGR format :slight_smile:
