Currently not: you provide a predefined set of templates to search for (geometric transformations, or different objects), each of equal importance, and it will look for each of them, then merge and filter the detections to remove the overlapping ones.
The net result is that each object in the image should be detected once by its “best matching template”.
It’s kind of a brute-force approach, but this way the candidate templates are not limited to transformations of the original template: they can represent different objects or, in biology, different developmental stages…
So you can use it for simultaneous detection of various objects, or even for classification.
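To illustrate the “merge and filter” step: the idea is a non-maximum-suppression pass, where the highest-scoring detection wins and overlapping lower-scoring ones are discarded. Here is a minimal NumPy-free sketch of that logic (the function names, the `(label, box, score)` tuple layout, and the `max_overlap` default are my own assumptions for the example, not the package’s actual API):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_overlaps(detections, max_overlap=0.25):
    """Keep the best-scoring detection among overlapping ones.

    detections: list of (label, (x, y, w, h), score) tuples,
    possibly coming from different templates.
    """
    kept = []
    # Visit detections best-score first, so each object in the image
    # is claimed by its "best matching template".
    for det in sorted(detections, key=lambda d: d[2], reverse=True):
        if all(iou(det[1], k[1]) <= max_overlap for k in kept):
            kept.append(det)
    return kept
```

This is how two templates hitting the same object end up as a single detection: the lower-scoring hit overlaps the higher-scoring one and gets filtered out.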
If you expect a single match per image and you have a defined score threshold, then you could use the existing code and write a custom while loop that, at each iteration:
- generates a new template by rotation or flipping
- looks for matches with N_object=1 and your custom score threshold

If it finds something at a given iteration, you stop iterating; otherwise you keep rotating the template and searching.
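The loop above could look something like this. This is a self-contained sketch using a naive NumPy normalized cross-correlation in place of the package’s search function, just to make the early-stopping control flow concrete (in practice you would call the package’s search with N_object=1 and your score threshold instead of `best_match`; the function names here are invented for the example):

```python
import numpy as np

def best_match(image, template):
    """Naive normalized cross-correlation: best score and (x, y) location."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_xy = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            score = float((p * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_score, best_xy

def search_with_early_stop(image, template, score_threshold=0.9):
    """Try rotations (and flips) one at a time; stop at the first good hit."""
    for base in (template, np.fliplr(template)):
        for k in range(4):  # the four 90-degree rotations
            rotated = np.rot90(base, k)
            score, xy = best_match(image, rotated)
            if score >= score_threshold:
                return rotated, score, xy  # early stop: good enough
    return None  # nothing above threshold
```

Note that this only handles 90-degree rotations; arbitrary angles would need interpolation (e.g. scipy.ndimage.rotate), and as said above, stopping at the first hit means you never see whether a later rotation would have scored higher.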
But with this “early stopping”, you would never know whether one of the remaining, untested rotations would have given a better match…
What I can think of right now to improve the current implementation: if you know you are looking for geometrical transformations, we could compute the transformed templates progressively instead of requiring a full list of already-rotated templates.
So some kind of lazy template transformation (not limited to rotation; I am thinking of taking an instance of a data-augmentation factory, as in deep-learning training).
It would save a bit of memory compared to providing a list of template images, but it would take some effort I think, and you have to keep in mind that the more templates, the longer the search!
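The lazy idea could be as simple as a Python generator that yields one transformed template at a time, so only the current variant lives in memory (this assumes the search accepts an iterable of (label, template) pairs; the labels and function name are made up for the sketch):

```python
import numpy as np

def lazy_transforms(template):
    """Yield (label, array) pairs for the 8 axis-aligned variants
    of a template (4 rotations x 2 flips), computed on demand."""
    for flipped in (False, True):
        base = np.fliplr(template) if flipped else template
        for k in range(4):  # 0, 90, 180, 270 degrees
            label = f"{'flip_' if flipped else ''}rot{90 * k}"
            yield label, np.rot90(base, k)
```

A richer version could wrap any augmentation pipeline (arbitrary angles, scaling…) behind the same generator interface, which is what I mean by a data-augmentation factory.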