Template matching using SSIM

Hi Folks,

I was looking for a program that can do template matching using the structural similarity index measure (SSIM). The idea is to have a directory of fly wing images and choose one specific orientation that I want to align all the other wings to. To do so, I was thinking of computing the SSIM between the template and the other images in the directory; if the SSIM is below a threshold, I would apply affine transformations such as rotations and flips until the SSIM rises above a preset threshold. Does something like this already exist in Python or Fiji?
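A minimal sketch of that brute-force idea, assuming scikit-image for SSIM and rotation; the angle step, threshold, and function name are placeholders:

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import rotate

def orient_to_template(template, image, threshold=0.8, angle_step=15):
    """Try rotations and flips of `image` until its SSIM against `template`
    exceeds `threshold`; return the first transform that does.
    Assumes template and image have the same shape and intensity range."""
    data_range = template.max() - template.min()
    for flip in (False, True):
        base = np.fliplr(image) if flip else image
        for angle in range(0, 360, angle_step):
            candidate = rotate(base, angle, preserve_range=True)
            score = structural_similarity(template, candidate, data_range=data_range)
            if score >= threshold:
                return candidate, score, angle, flip
    return None  # no orientation reached the threshold
```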

Hi @kapoorlab,

The existing multi-template matching supports “only” the template-matching metrics from OpenCV.

Multi-Template Matching for object-localisation in Fiji - Announcements - Image.sc Forum

However, you could try to adapt the Python package to support your custom metric.
For instance, you could try to modify the function computeScoreMap.
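As an illustration only, a hypothetical SSIM-based score map could be a sliding-window scan of the template over the image, filling a map with the same shape as the ones the OpenCV metrics produce. This is just a sketch and will be much slower than the OpenCV implementations:

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score_map(template, image):
    """Slide the template over the image and store the SSIM of each window.
    Output has shape (H-h+1, W-w+1), like OpenCV's matchTemplate score maps."""
    h, w = template.shape
    H, W = image.shape
    score_map = np.zeros((H - h + 1, W - w + 1), dtype=np.float32)
    data_range = image.max() - image.min()
    for y in range(score_map.shape[0]):
        for x in range(score_map.shape[1]):
            window = image[y:y + h, x:x + w]
            score_map[y, x] = structural_similarity(template, window,
                                                    data_range=data_range)
    return score_map
```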


Thanks for the link @LThomas. In this program, if it does not find a match, does it try rotations/flips to see whether that produces a better match to the template? If not, maybe I could try to add that to your code and make a PR.

Currently not: you provide a predefined set of templates to search for (geometric transformations or different objects), each of equal importance, and it will look for each of them and finally merge and filter the detections to remove overlapping ones.

The net result is that each object in the image should be detected once by its “best matching template”.
It’s kind of a brute-force approach, but this way the candidate templates are not limited to transformations of the original templates; they can represent different objects or, in biology, different developmental stages…
So you can use it for simultaneous detection of various objects, or even for classification.

If you expect a single match per image, and you have a defined score threshold, then you could use the existing code and write a custom while loop which at each iteration:

  1. Generate a new template by rotation or flipping
  2. Look for matches with MTM.findTemplates, with N_object=1 and your custom score threshold

If it finds something at a given iteration, you stop iterating; otherwise you keep rotating the template and searching.
But with this “early stopping”, you would never know whether a later, untested rotation would have given a better match…
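A minimal sketch of that loop; the MTM.findTemplates call and its arguments are taken from the description above, so the exact name, signature, and return format should be checked against the released package, and the transform set and threshold are placeholders:

```python
import numpy as np
import MTM  # Multi-Template-Matching package

def search_with_transforms(template, image, score_threshold=0.8):
    """Try successive 90-degree rotations and flips of the template until one
    yields a detection above the score threshold (early stopping as above)."""
    for flip in (False, True):
        base = np.fliplr(template) if flip else template
        for k in range(4):  # 0, 90, 180, 270 degrees
            candidate = np.rot90(base, k)
            # call signature assumed from the description above; check the
            # package documentation for the exact name and return format
            hits = MTM.findTemplates([("wing", candidate)], image,
                                     N_object=1, score_threshold=score_threshold)
            # removing this early return would let you test every transform
            # and keep the overall best-scoring one instead
            if len(hits) > 0:
                return candidate, hits
    return None, None
```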

What I could think of right now to improve the current implementation would be that, if you know you are looking for geometrical transformations, we could compute the transformed templates progressively instead of providing a full list of already-rotated templates.

So some kind of lazy template transformation (not limited to rotations; I am thinking of taking an instance of a data-augmentation factory, as in deep-learning training).

It would save a bit of memory compared to providing a list of template images, but it would take some effort I think, and you have to keep in mind that the more templates, the longer the search!
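Such a lazy factory could be as simple as a generator that yields transformed copies on demand; a sketch using plain NumPy rotations and flips (the names and transform set are placeholders):

```python
import numpy as np

def lazy_templates(template):
    """Yield transformed templates one at a time instead of building the
    full list up front; each item is a (name, image) pair."""
    for flip in (False, True):
        base = np.fliplr(template) if flip else template
        for k in range(4):  # 0, 90, 180, 270 degrees
            name = f"flip={flip}_rot={90 * k}"
            yield name, np.rot90(base, k)

# the generator can be consumed template by template during the search,
# so only one transformed copy lives in memory at a time
```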


Thanks @LThomas, I think what you are suggesting by generating a new template is what I am looking for, because my goal is to put all the wings in the same orientation. For now, I made them all aligned along the x-axis by rotating them along the smallest eigenvector of the covariance matrix (Co-ordinate Transformation).
In your code I could have a fixed number of iterations, corresponding to the number of rotation and flipping operations I have to do, because given the experiment the fly wings can only be in so many orientations. But irrespective of which orientation they are in, I want to put them in the same axis-aligned coordinate system using area-preserving transformations only.
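For reference, a sketch of that covariance/eigenvector pre-alignment, assuming a roughly binary wing image and scikit-image for the rotation; the threshold is a placeholder and the sign of the angle may need flipping depending on the y-down image convention:

```python
import numpy as np
from skimage.transform import rotate

def align_principal_axis(image, threshold=0):
    """Rotate the image so the direction of largest variance of the foreground
    pixels (the largest eigenvector of their covariance matrix, i.e. the one
    perpendicular to the smallest eigenvector) lies along the x-axis."""
    ys, xs = np.nonzero(image > threshold)   # foreground pixel coordinates
    cov = np.cov(np.stack([xs, ys]))         # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major = eigvecs[:, -1]                   # axis of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0]))
    # rotation is rigid, hence area-preserving up to interpolation;
    # negate `angle` if the result comes out tilted the wrong way
    return rotate(image, angle, resize=True, preserve_range=True)
```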

I will check out your code and let you know how it goes. Thanks again.