Hello everyone!

I am currently investigating different packages and methods to perform automatic registration of histopathology images (typically registering the HE and IHC images of the same tissue sample). I am limiting my investigation to rigid (Euclidean) transforms for now.

I found QuPath’s “Image alignment” tool very convenient and quick for such automatic registration. Its autoAlign implementation seems to rely mainly on the OpenCV implementation of *findTransformECC* (see link). QuPath takes less than a second to successfully register my test images.

Nonetheless, when I try to implement a minimal version using opencv-python, the registration takes more than 10 seconds and the result is not satisfactory.

Here is my code:

```python
import timeit

import cv2
import numpy as np
from openslide import OpenSlide


def ecc_tr(fixed_im, moving_im, nb_iterations):
    start = timeit.default_timer()
    # Read the images to be aligned: only read the last (lowest-resolution) thumbnail
    fixed_thumbnail = np.array(fixed_im.get_thumbnail(fixed_im.level_dimensions[-1]))
    moving_thumbnail = np.array(moving_im.get_thumbnail(moving_im.level_dimensions[-1]))
    # Convert images to grayscale (OpenSlide thumbnails are RGB, not BGR)
    im1_gray = cv2.cvtColor(fixed_thumbnail, cv2.COLOR_RGB2GRAY)
    im2_gray = cv2.cvtColor(moving_thumbnail, cv2.COLOR_RGB2GRAY)
    # Output size (height, width) taken from the moving image
    sz = im2_gray.shape
    # Define the motion model
    warp_mode = cv2.MOTION_EUCLIDEAN
    # Define the 2x3 (affine) matrix and initialize it to identity
    warp_matrix = np.eye(2, 3, dtype=np.float32)
    # Specify the number of iterations
    number_of_iterations = nb_iterations
    # Threshold on the increment of the correlation coefficient between two
    # iterations (a negative value forces the maximal number of iterations)
    termination_eps = -1
    # Define termination criteria (COUNT | EPS == 3)
    criteria = (cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS,
                number_of_iterations, termination_eps)
    # Run the ECC algorithm; the result is stored in warp_matrix
    (cc, warp_matrix) = cv2.findTransformECC(im1_gray, im2_gray, warp_matrix,
                                             warp_mode, criteria, None, 5)
    # Use warpAffine for translation, Euclidean and affine transforms
    im2_aligned = cv2.warpAffine(moving_thumbnail, warp_matrix, (sz[1], sz[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP,
                                 borderValue=(255, 255, 255))
    stop = timeit.default_timer()
    print(f"Elapsed time: {round(stop - start, 3)} seconds")
    return fixed_thumbnail, moving_thumbnail, im2_aligned, warp_matrix, round(stop - start, 3)


fixed_im = OpenSlide("reference.svs")
moving_im = OpenSlide("moving.svs")
fixed_thumbnail, moving_thumbnail, im2_aligned, warp_matrix, timing = ecc_tr(fixed_im, moving_im, 100)
```

I have 2 issues compared to QuPath, in terms of both transformation quality and processing time:

- first, the transformation is really close to the identity after 100 iterations: the tissue barely moves at all
- second, it takes more than 10 seconds to run this piece of code, whereas QuPath takes less than a second.

The details of the OpenCV documentation are here: findTransformECC. I deliberately set a negative epsilon to force the algorithm to run the full 100 iterations, like QuPath.

Do you have any idea what I am doing wrong here?

PS: I can definitely increase the number of iterations to get better results. With 1000 iterations the result is great… but it takes 94 seconds.