I am implementing several autofocus algorithms (absolute gradient, Brenner gradient, energy of Laplacian, normalized variance, etc.) to find the in-focus image in a stack of OCT images. The algorithms do locate the sharpest image, but the problem is that the metric reaches a minimum there. With the intensity-based and autocorrelation algorithms I get a maximum at the in-focus image, as expected, but with the gradient-based and statistics-based algorithms I get a minimum instead, which doesn't make sense to me, since those measures should also peak at best focus.
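To make the question concrete, here is a minimal sketch of the textbook definitions I am following for two of the metrics (a simplified illustration assuming NumPy and 2-D grayscale frames, not my exact code). Both are conventionally maximized at the in-focus frame, which is why the minimum I observe is confusing:

```python
import numpy as np

def brenner(img):
    """Brenner gradient: sum of squared differences of pixels two rows apart.
    Conventionally a maximum at the in-focus image."""
    img = img.astype(np.float64)        # avoid unsigned-integer underflow
    d = img[2:, :] - img[:-2, :]
    return np.sum(d ** 2)

def normalized_variance(img):
    """Normalized variance: intensity variance divided by mean intensity.
    Also conventionally a maximum at best focus."""
    img = img.astype(np.float64)
    mu = img.mean()
    return img.var() / mu if mu > 0 else 0.0

# stack: iterable of 2-D grayscale frames, one per focus position
# scores = [brenner(frame) for frame in stack]
# best_index = int(np.argmax(scores))   # I would expect argmax, not argmin
```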
Does anybody have an idea what could cause this?
Thanks in advance.