Skimage warp - determine output shape so output does not crop?

I am trying to estimate Affine or Euclidean transforms via skimage.transform from key points (the colorful crosses in the two square pictures in the figure below) and use skimage.transform.warp to create a composite like the one shown on the bottom left (“Want:”). I must be missing something obvious, but I don’t know how to auto-pad the output to get that result: I am stuck with what you see on the bottom right (“Get:”), i.e. a cropped image. I tried playing with the output_shape argument of warp(), but that doesn’t get me far.

What is the trick here?


Unfortunately there’s no easy answer here. Neither skimage.transform.warp nor scipy.ndimage.{geometric,affine}_transform supports negative coordinates in the output. In other words, you will only ever see coordinates (0, 0) → output_shape in the output. What this means is that, assuming your desired output image above is 1000x1000, you need the matched points to end up around (500, 500), and for that to happen, you need to pad the input images so that one of them has the points at that position.
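For illustration, here is a minimal sketch of that padding approach, using two overlapping crops of skimage.data.camera as stand-ins for the two pictures. The crops, key point coordinates, and pad width below are made up for the example; in practice you would derive the margin from the expected transform:

import numpy as np
from skimage.data import camera
from skimage.transform import EuclideanTransform, warp

# Two overlapping crops of the same image stand in for the two pictures,
# with three matched key points given in each crop's own (x, y) frame.
img = camera()
ref = img[100:300, 50:300]
mov = img[150:350, 120:370]
pts_img = np.array([[150.0, 180.0], [220.0, 250.0], [280.0, 160.0]])
pts_ref = pts_img - [50, 100]   # subtract the crop's (col, row) origin
pts_mov = pts_img - [120, 150]

# Pad the reference so the composite has room on all sides, and shift
# its key points by the same amount before estimating the transform.
pad = 200
ref_padded = np.pad(ref, pad)
pts_ref_padded = pts_ref + pad

# warp maps output coordinates to input coordinates, so estimate the
# transform from padded-reference space to moving-image space.
tform = EuclideanTransform()
tform.estimate(pts_ref_padded, pts_mov)
mov_in_ref = warp(mov, tform, output_shape=ref_padded.shape)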

Alternatively, you can modify the transform itself so that you warp both images to ensure that they are in the positive space. You can do this by computing the corner coordinates after the transform, and adding a translation so that all four corners land in the positive domain.
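A sketch of that approach, where the transform and variable names are just for illustration:

import numpy as np
from skimage.data import camera
from skimage.transform import AffineTransform, warp

image = camera()
tform = AffineTransform(rotation=0.3, translation=(-40, 25))  # arbitrary forward transform

# Push the four corners through the forward transform.
h, w = image.shape
corners = np.array([[0, 0], [w, 0], [w, h], [0, h]])
warped_corners = tform(corners)

# Translate so all corners land in the positive domain, and size the
# output to contain them.
offset = warped_corners.min(axis=0)
shift = AffineTransform(translation=-offset)
output_shape = np.ceil(warped_corners.max(axis=0) - offset)[::-1].astype(int)

# warp expects the inverse map, so compose the shift and invert.
warped = warp(image, (tform + shift).inverse, output_shape=output_shape)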

Thanks @jni. I did indeed end up just padding the source images / shifting coordinates before warping to make sure the output is not cropped. Maybe it would be worth implementing such functionality in the current skimage functions?

My two cents… You may also define a global domain shared by both images and compose the affine transformations:

from matplotlib import pyplot as plt
import numpy as np
from skimage.data import coins
from skimage.transform import warp
from matplotlib.patches import Rectangle

img = coins()

# img0 bbox in img space
i0_0, j0_0, i1_0, j1_0 = 110, 10, 270, 250
img0 = img[i0_0:i1_0, j0_0:j1_0]

# img1 bbox in img space
i0_1, j0_1, i1_1, j1_1 = 5, 150, 200, 350
img1 = img[i0_1:i1_1, j0_1:j1_1]

# The transformation from img1 space to img0 space
t1 = i0_0 - i0_1
t0 = j0_0 - j0_1

loc_tf = np.eye(3)
loc_tf[:2, 2] = t0, t1

# img0 in "global" reconstruction img
out_shape = 400, 400
glob_tf0 = np.eye(3)
glob_tf0[:2, 2] = -20, -150  # Arbitrary

img0_glob = warp(img0, glob_tf0, output_shape=out_shape)

# img1 in "global" img
glob_tf1 = loc_tf.dot(glob_tf0)  # Composition: global -> img0 -> img1
img1_glob = warp(img1, glob_tf1, output_shape=out_shape)

# Assemble img0 and img1 in "global" img
mask0 = warp(np.ones_like(img0), glob_tf0, output_shape=out_shape)
mask1 = warp(np.ones_like(img1), glob_tf1, output_shape=out_shape)

mask = mask0 + mask1

img_glob = img0_glob + img1_glob
img_glob[mask > 0] /= mask[mask > 0]

# Display
r0 = Rectangle((j0_0, i0_0), j1_0-j0_0, i1_0-i0_0, fill=False, ec='r')
r1 = Rectangle((j0_1, i0_1), j1_1-j0_1, i1_1-i0_1, fill=False, ec='b')

fig, (ax0, ax1, ax2) = plt.subplots(1, 3)
ax0.imshow(img, cmap='gray')
ax0.add_patch(r0)
ax0.add_patch(r1)
ax1.imshow(mask, cmap='gray')
ax2.imshow(img_glob, cmap='gray')

plt.show()

Indeed! See Add support for negative coordinates in warp · Issue #5330 · scikit-image/scikit-image · GitHub


I feel like that’s a big lift for users — we should try to simplify that workflow. Still, that code would make a nice example, I think! Perhaps with some slightly more sophisticated transforms from transform._geometric.

There’s a small subtlety here, which is the reason it’s not part of warp: warp takes the inverse transform as input, but in order to know how to size and reposition the output to fit, you need to know the forward transform. I.e., you take the corner coordinates of your original image, forward-transform them to the output space, and from that learn the desired output shape and by how much it needs to be translated to land in the positive domain. For most transforms you can obtain the forward transform from the inverse, but that is not always possible.

For now, the simplest option may be a utility function that provides the output_shape and a translation transform, which can then be subtracted from the inverse transform.
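A hypothetical version of such a helper (not part of skimage; the name and signature are made up) could look like this, packaging the corner trick from above:

import numpy as np
from skimage.transform import AffineTransform

def fit_output(forward_tform, input_shape):
    """Return (output_shape, shift) so that composing `forward_tform`
    with `shift` maps an image of `input_shape` entirely into the
    positive domain."""
    h, w = input_shape[:2]
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]])
    warped = forward_tform(corners)
    offset = warped.min(axis=0)
    shift = AffineTransform(translation=-offset)
    output_shape = tuple(np.ceil(warped.max(axis=0) - offset).astype(int)[::-1])
    return output_shape, shift

# Usage would then be, e.g.:
#     output_shape, shift = fit_output(tform, image.shape)
#     warped = warp(image, (tform + shift).inverse, output_shape=output_shape)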


Still, that code would make a nice example, I think! Perhaps with some slightly more sophisticated transforms from transform._geometric.

It is already my intention; I hope to find some time to open a PR soon :wink:
