Finding focused slices

Dear all,

This is a small account of my use of the “Microscope Image Focus Quality” plugin, which I have incorporated into my pipeline, of some of the problems I encountered along the way, and of how I resolved them. My hope is that this can be of help to others, and maybe attract some suggestions on how to improve it.

The general problem:
From a multichannel fluorescence acquisition, in which each channel comprises a stack collected over a defined range of focal planes, obtain a focused image for each channel in order to create a composite image with all channels in focus.

The specific problem:
The focal plane may not be the same for different channels, which means that a routine is needed to seek the most focused image within each channel. From testing multiple plugins I came to the conclusion that, for my images (512x512 pixels, obtained with a 60X oil immersion objective), the “Microscope Image Focus Quality” plugin was the best performer. However, even with this plugin the results were often a mixed bag: most of the time it was a near miss, identifying as the most focused slice one that was slightly out of focus; other times it got it totally wrong; and only a few times did it get it right.

Some comments:
The key thing I came to understand in using this plugin is that my images differ significantly from the ones used to train the neural network model that underlies it. The key differences appear to be (1) overall image size, i.e. number of pixels; and (2) feature size and number of features, which depend on magnification/resolution as well as on the sample and its preparation.

The example below shows one of my images in focus, with some elements showing out-of-focus blurring; the plugin could not identify this slice as the correct focal plane.

If you have tried the plugin, you will know that once it is loaded it automatically sets the maximum number of patches it can work with for a given image size. Each patch covers a fixed number of pixels and cannot be made smaller to accommodate a larger number of patches. This can cause the plugin to fail if, for example, the features that occur within each patch are significantly larger or smaller than the objects used to train the model.

To circumvent this problem, it seems that one can interpolate the image to make it larger or smaller, so that the object size in terms of number of pixels is more in line with the size of the features used to train the model (nuclei). So, for the nuclei above I reduced the image from 512x512 to 128x128; in images with much smaller features, I increased the image size to 1024x1024.
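For illustration, here is a minimal sketch of that rescaling step in Python with OpenCV (the 128 and 1024 target sizes are just the values mentioned above; within ImageJ the equivalent would be a scale/resize operation before running the plugin):

import cv2 as cv
import numpy as np

def rescale_for_focus_model(img: np.ndarray, target_size: int) -> np.ndarray:
    # Interpolate the image up or down so that the feature size (in pixels)
    # is closer to that of the nuclei the model was trained on.
    # Use area interpolation when shrinking and cubic when enlarging.
    interp = cv.INTER_AREA if target_size < img.shape[0] else cv.INTER_CUBIC
    return cv.resize(img, (target_size, target_size), interpolation=interp)

# Large nuclei in a 512x512 image: shrink before running the focus plugin.
# small = rescale_for_focus_model(img, 128)
# Much smaller features: enlarge instead.
# large = rescale_for_focus_model(img, 1024)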

With this approach the plugin now works very well for images where previously it did not. This same approach does not work with other plugins that look predominantly at variances.

Currently, the biggest drawbacks I see in using this plugin are its slow speed and the fact that it does not work on stacks, so one has to write a bit of code to pull the in-focus image out of a stack.

Please drop a comment if you have any suggestions!

Best,
R.


This is to update everyone with respect to the task of finding focused slices within a stack.

After trying some of the plugins available in the ImageJ repository without really obtaining good results for my images, I eventually resorted to the “Microscope Image Focus Quality” plugin. It was developed using neural networks and pattern-recognition algorithms and aims at classifying microscope images with respect to their focus quality: it maps the image into a series of square patches and attributes a “focus score” to each patch.

The results I obtained with this plugin were the best, but it had some major drawbacks: (1) it was painfully slow; (2) images may need to be scaled so that the patterns present in the image match the size of the objects used to train the algorithm; (3) since the end result is a map rather than a global score, the map needs to be processed; and (4) at the time of writing it does not process entire stacks. The last two points mean that a fair amount of (although not complicated) coding needs to be done. But it is the slow speed of the whole process that gives me the most grief: finding the most focused slice in one 77-slice stack takes circa 5 minutes. In my case, with 3 fluorescence channels per file, that means investing 1 h of computing time to find the most focused slices across 4 files. It felt ridiculous, but it was the approach that gave me the best results.
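As an aside, the map-processing part (points 3 and 4) only needs a few lines. Here is a hedged sketch in Python, assuming one has already extracted a 2D array of per-patch focus scores for every slice (higher meaning better focus; how exactly the scores come out of the plugin is not shown here):

import numpy as np
from typing import List

def best_slice_from_patch_maps(patch_maps: List[np.ndarray]) -> int:
    # patch_maps[z] is the 2D per-patch focus-score map for slice z
    # (assumption: higher score = better focus).
    # Collapse each map to its mean and return the index of the best slice.
    global_scores = [float(np.mean(m)) for m in patch_maps]
    return int(np.argmax(global_scores))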

More recently, I came across another plugin, “Focus_LP” by Helmut Glünder, which has consistently given very good results while processing the full 77 slices in less than one second. Compared with the “Microscope Image Focus Quality” plugin the results are very similar and of high quality. Furthermore, I did not need to scale the images up or down, and it has worked well for my 512x512-pixel images shown above. The output is exactly what one needs for this sort of job: the number of the slice where the focus of a selected area (in my case the entire image) is of highest quality. Integration within a macro is easy.

The "Focus_LP" plugin can be downloaded from Helmut Glünder's website at www.gluender.de/.

If you have alternative solutions for this problem, please leave a comment on this thread.

Very best,
R.


I don’t know if this is still relevant, but maybe someone else will find it useful. The easiest way to find the best focused plane in a stack is to run a Laplacian on each slice and then pick the one with the highest variance. This focus measure is described in “Analysis of focus measure operators for shape-from-focus” (DOI: 10.1016/j.patcog.2012.11.011).
Here is an example implementation in Python with NumPy and OpenCV:

import numpy as np
import cv2 as cv
from typing import List

Image = np.ndarray


def laplacian_variance(img: Image) -> float:
    # Variance of the Laplacian: higher values indicate a sharper, more in-focus image.
    return float(np.var(cv.Laplacian(img, cv.CV_64F, ksize=21)))


def find_best_z_plane_id(img_list: List[Image]) -> int:
    # Score every z-plane and return the index of the sharpest one.
    lap_vars_per_z_plane = [laplacian_variance(img) for img in img_list]
    return int(np.argmax(lap_vars_per_z_plane))
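
A possible usage sketch, assuming the stack is stored as a multi-page TIFF that OpenCV can read ("stack.tif" is a made-up file name):

# Hypothetical usage: load a multi-page TIFF as a list of grayscale slices
# and pick the best focused one.
ok, slices = cv.imreadmulti("stack.tif", flags=cv.IMREAD_GRAYSCALE)
if ok:
    best_id = find_best_z_plane_id(list(slices))
    print("Best focused slice:", best_id)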

In addition, this comparative study may help to understand which approach does best in which situation.

However, be warned that what the authors call “Laplacian” isn’t mathematically correct …

Here you will find an ImageJ macro that uses a refined approach (implemented as an ImageJ plugin) to solve the task in question.

In any case it must be realized that …
… the best focused slice of a stack need not be perfectly in focus, because it is merely the relatively best focused slice of a focus series, according to a kind of spatial majority decision regarding the focus of the slice or selection. Consequently, the results will strongly depend on the depth resolution of the focus series and on the image content.
Especially for imaged 3D objects, regional focus analyses (using small ROIs) provide more meaningful results than global ones.
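
To make that last point concrete, here is a small sketch of such a regional analysis in Python, reusing the Laplacian-variance measure from the earlier example (not the refined approach of the plugin/macro above); the tile size is arbitrary:

import numpy as np
import cv2 as cv
from typing import List

def best_slice_per_tile(img_list: List[np.ndarray], tile: int = 128) -> np.ndarray:
    # For every tile x tile region, find the slice whose Laplacian variance
    # is highest in that region, giving a regional map of best-focus indices.
    h, w = img_list[0].shape[:2]
    ny, nx = h // tile, w // tile
    scores = np.zeros((len(img_list), ny, nx))
    for z, img in enumerate(img_list):
        lap = cv.Laplacian(img, cv.CV_64F, ksize=21)
        for i in range(ny):
            for j in range(nx):
                scores[z, i, j] = np.var(lap[i*tile:(i+1)*tile, j*tile:(j+1)*tile])
    return np.argmax(scores, axis=0)  # (ny, nx) map of best slice indices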
