# Equivalent for matlab's PiecewiseLinearTransformation2D?

I am trying to use scikit-image estimators on user-defined “anchor points” to warp images. Since the transformation is inhomogeneous across the image, I chose `PiecewiseAffineTransform`. What I really want, however, is a result comparable to the output of Matlab’s PiecewiseLinearTransformation2D. The issue is that areas outside my user-defined points are “cut” from the image, while in the Matlab output they are “inferred”:

I hope this makes sense looking at the attached output. The estimation/warping was run with the same user-defined key/anchor point pairs. How can I get a result more like the Matlab version, retaining the pixels “outside the borders” instead of setting them to NaN/zero?
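Here is a minimal sketch reproducing the behaviour (synthetic image and made-up anchor points, not my actual data):

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

rng = np.random.default_rng(0)
image = rng.random((100, 100))

# Made-up anchor points in (x, y) order.
src = np.array([[20, 20], [80, 20], [20, 80], [80, 80], [50, 50]], dtype=float)
dst = src + np.array([[3, -2], [-4, 1], [2, 3], [-1, -3], [0, 2]], dtype=float)

tform = PiecewiseAffineTransform()
tform.estimate(src, dst)

# Output pixels outside the convex hull of the points get `cval`
# (default 0), which is what produces the "cut" borders.
warped = warp(image, tform)
```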

Can you share code and the input image?

Also, do you have some idea about what Matlab might be doing outside the control points?

Thanks for the quick reply, @jni !
I have uploaded the data as zip here: data.zip (56.5 KB)

It contains:

• grid_picture.npy (the original picture to be unwarped)
• original_points.npy (nx2 array of points (x,y))
• transformed_points.npy (nx2 array of points (x,y))

I created a gist here: Scikit image piecewise affine transform problem · GitHub

I am not sure I understand what Matlab is doing outside the points.

The method implemented is from Goshtasby (1986), “Piecewise linear mapping functions for image registration”. In Section 2.4 (Summary), Goshtasby outlines three steps, of which skimage implements only the first two. The third step is:

> Determine functions for mapping of points outside the convex hulls by extending the boundary triangle planes. Intersections of neighboring planes limit the extent of the boundary planes that should be used in the extrapolation.

It is not immediately obvious to me how to do this.
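One rough approximation of that third step (my own simplification, not Goshtasby’s exact rule — it ignores the plane-intersection limits) is to reuse the affine map of a boundary triangle near each outside point. Since barycentric coordinates extend affinely outside a triangle, `scipy.spatial.Delaunay` already gives us everything needed:

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def piecewise_map_extrapolated(src, dst, query):
    """Map `query` (n, 2) points through the per-triangle affine transforms
    defined by src -> dst; points outside the convex hull reuse the affine
    map of a triangle touching the nearest control point (a simplification,
    skipping the plane-intersection limits Goshtasby describes)."""
    tri = Delaunay(src)
    simplex = tri.find_simplex(query)
    outside = simplex == -1
    if outside.any():
        _, nearest = cKDTree(src).query(query[outside])
        simplex[outside] = tri.vertex_to_simplex[nearest]
    # Barycentric coordinates extend affinely outside the triangle,
    # so the same formula also extrapolates.
    T = tri.transform[simplex, :2]   # (n, 2, 2) inverse edge matrices
    r = tri.transform[simplex, 2]    # (n, 2) reference vertices
    b = np.einsum('nij,nj->ni', T, query - r)
    bary = np.hstack([b, 1 - b.sum(axis=1, keepdims=True)])
    return np.einsum('ni,nij->nj', bary, dst[tri.simplices[simplex]])

mapped = piecewise_map_extrapolated(
    np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
    np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]),
    np.array([[1.5, 1.5], [-0.5, -0.5]]),  # both points lie outside the hull
)
# → [[3., 3.], [-1., -1.]]
```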

I played around with two approaches: a 4th-degree polynomial transform, and thin plate splines. The first worked OK, the second was great. The implementation is by Zachary Pincus, who is an emeritus skimage developer (he said that he’d consider releasing the code under BSD if someone found it useful).

I’d like to see thin plate splines in scikit-image. The feature has been requested since as early as 2017.

```python
# https://github.com/zpincus/celltool/blob/9f2da1a905413fedcca89ad71ea94c51b1f2da18/celltool/numerics/image_warp.py
# https://github.com/scikit-image/scikit-image/issues/2429#issuecomment-270493362
import matplotlib.pyplot as plt
import numpy as np

from image_warp import warp_images

grid_picture = np.load('grid_picture.npy')
original_points = np.load('original_points.npy')
transformed_points = np.load('transformed_points.npy')

# The [:, ::-1] flips (x, y) points to (row, col) order for warp_images.
grid_picture_unwarped = warp_images(
    original_points[:, ::-1],
    transformed_points[:, ::-1],
    [grid_picture],
    [0, 0, grid_picture.shape[1], grid_picture.shape[0]],
)[0]

plt.imshow(grid_picture_unwarped)
plt.scatter(transformed_points[:, 0], transformed_points[:, 1], s=2)
plt.show()
```

P.S. Please note the order of coordinates. Our `transform` module relies on coordinates being in XY instead of row-column format. This was an unfortunate decision on my part, which we are working on remedying.
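Concretely, converting between the two conventions is just a column flip:

```python
import numpy as np

points_xy = np.array([[20, 10], [40, 30]])  # (x, y) == (col, row)
points_rc = points_xy[:, ::-1]              # (row, col)
```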


This works beautifully, thanks so much!

For me the next step is to transform arbitrary points (x/y) from the original to the warped space. I was doing that previously with `tform.inverse(original_pixels)`, where `tform` is the piecewise affine transform estimated through scikit-image. How do I do that using the functions by Zachary (celltool/image_warp.py at 9f2da1a905413fedcca89ad71ea94c51b1f2da18 · zpincus/celltool · GitHub)?
EDIT: I posted this question here as well.
EDIT2: … and it was solved.
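For anyone landing here: one self-contained way to map arbitrary points with a thin plate spline (my own sketch using SciPy, not exactly the solution from the linked thread) is `scipy.interpolate.RBFInterpolator`:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up control point pairs; substitute original_points / transformed_points.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1], [0.0, 0.0]])

# One interpolator per direction; swap src/dst to get the inverse mapping.
forward = RBFInterpolator(src, dst, kernel='thin_plate_spline')
mapped = forward(np.array([[0.25, 0.25]]))  # shape (1, 2)
```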

It would be immensely useful if Zachary’s functions became part of scikit-image.

Zachary agreed to relicense the code, so now we need a volunteer to take on the task of turning it into a transform. You wouldn’t happen to be interested in this fun little exercise?

I would love to help, but I don’t think I will find time for it for at least another two weeks. I saw you were discussing the implementation a bit in the GitHub issue. Let me know if you think I can help somewhere. It’s fantastic that this is already on its way.