Autofocus algorithms

Hi all,

I am writing several autofocus algorithms, such as absolute gradient, Brenner, energy of Laplacian, and normalized variance, to find the in-focus image among a stack of OCT images. The algorithms can find the sharpest image, but the problem is that I get a minimum metric value at the sharpest image. When I use the intensity-based or autocorrelation algorithms I get a maximum at the in-focus image, but with the gradient-based or statistics-based algorithms I get a minimum, and that does not make sense to me.
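For context, this is roughly how I compute two of the metrics (a stripped-down sketch, not my full code; it assumes 2-D grayscale NumPy arrays, and `stack` is just placeholder data standing in for my OCT frames):

```python
import numpy as np

def brenner(img):
    # Brenner gradient: sum of squared differences two pixels apart in x
    d = np.asarray(img, dtype=float)
    return np.sum((d[:, 2:] - d[:, :-2]) ** 2)

def normalized_variance(img):
    # intensity variance normalized by the mean intensity
    d = np.asarray(img, dtype=float)
    mu = d.mean()
    return d.var() / mu if mu > 0 else 0.0

# placeholder stack of frames; in my case these are the OCT slices
stack = [np.random.rand(64, 64) for _ in range(5)]
scores = [brenner(frame) for frame in stack]
best = int(np.argmax(scores))  # where I would expect the in-focus frame
```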
Does anybody have an idea?

Thanks in advance.
Sara

Sara,
I suggest that you follow the data, or look at the numbers for some simple cases to see what the various algorithms actually produce.
It seems to me that for an in-focus image the differences between pixels will be maximised, or the contrast will be at a maximum. After all, this is how you see anything: by virtue of contrast. So measures which look for differences should be larger when the image is focussed; the sketch below illustrates the idea.
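For instance, something along these lines (just an illustrative sketch, not your code; it assumes NumPy and SciPy and a plain sum-of-absolute-differences measure) shows such a measure falling as a textured pattern is progressively defocused:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sum_abs_diff(img):
    # sum of |i(x+1, y) - i(x, y)|: a simple difference-based focus measure
    d = np.asarray(img, dtype=float)
    return np.sum(np.abs(d[:, 1:] - d[:, :-1]))

# a textured test pattern: a 64x64 checkerboard with 8-pixel tiles
pattern = np.kron(np.indices((8, 8)).sum(axis=0) % 2, np.ones((8, 8))) * 255.0

print("sharp:", sum_abs_diff(pattern))
for sigma in (1, 2, 4):  # increasing simulated defocus
    print("sigma =", sigma, ":", sum_abs_diff(gaussian_filter(pattern, sigma)))
```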
I am not sure about the details of some of the algorithms you mention, so I do not know how they are implemented. But they will be documented, and by feeding in simple test cases like this you can check them yourself. I hope this is useful.

Hi Noelg,
I checked the numbers in the absolute gradient algorithm as an example. This algorithm sums the absolute value of the first derivative: Sum over x and y of |i(x+1, y) - i(x, y)| (in code, essentially the sketch below).
I followed the data, and the implementation works properly. The only problem is that the differences between pixels in the out-of-focus images are larger than those in the in-focus image!
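Here is the stripped-down version of the metric, taking x as the column index of a 2-D NumPy array:

```python
import numpy as np

def absolute_gradient(img):
    # Sum of |i(x+1, y) - i(x, y)| over all pixels (x = column index)
    d = np.asarray(img, dtype=float)
    return np.sum(np.abs(d[:, 1:] - d[:, :-1]))
```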

Best regards,
Sara