What’s the best way to determine the transformation that aligns two images in Python?
Essentially, I have a binarized version of a scanned slide, as well as the binarized output of a deep learning classifier run on that slide. The classifier output is slightly translated and scaled relative to the scan (an artifact of FCN/U-Net-style architectures). I would like to automate finding the transformation that registers the two images.
(I created the binarized slide image by applying a grayscale threshold filter of 0.75, and the binarized output image by mapping all background classes to 0 and all tissue classes to 1.)
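For reference, the binarization logic is roughly the following (function names and the tissue class ids here are placeholders, not my actual pipeline):

```python
import numpy as np

def binarize_slide(gray, threshold=0.75):
    """Binarize a grayscale slide image (values in [0, 1]) at a fixed threshold."""
    return (gray > threshold).astype(np.uint8)

def binarize_prediction(labels, tissue_classes):
    """Map all tissue class ids to 1 and every background class to 0."""
    return np.isin(labels, list(tissue_classes)).astype(np.uint8)
```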
Unfortunately, the binary images, while matching in overall shape, are nowhere close to being exact translations of each other. I can try OpenCV erode/dilate/blur operations to bring them closer, but I can never match them exactly, since our models behave unpredictably at the edges.
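The cleanup I've been attempting looks roughly like this (sketched here with `scipy.ndimage` equivalents of the OpenCV erode/dilate/blur calls; the parameter values are illustrative):

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, close_iters=2, blur_sigma=1.5):
    """Close small holes/gaps (dilation followed by erosion), then blur and
    re-threshold to soften ragged edges before comparing the two masks."""
    closed = ndimage.binary_closing(mask.astype(bool), iterations=close_iters)
    blurred = ndimage.gaussian_filter(closed.astype(float), sigma=blur_sigma)
    return (blurred > 0.5).astype(np.uint8)
```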
Here is an example set of images:
Superimposed (note the large gap in the bottom right edge):
I’ve looked into skimage's registration module, but it doesn’t seem able to handle this. I’ve also tried imreg_dft, but I couldn’t get it to work (it was seemingly stuck for over an hour before I killed it).
How can I effectively determine the registration transformation between these two images?