This is a short account of how I use the Microscope Image Focus Quality plugin, which I have implemented as part of my pipeline, some of the problems I encountered in the process, and how I resolved them. My hope is that this can be of help to others, and perhaps attract some suggestions on how to improve it.
The general problem:
From a multichannel fluorescence acquisition, in which each channel comprises a stack of images collected over a defined range of focal planes, obtain the in-focus image for each channel in order to create a composite image with all channels in focus.
The specific problem:
The focal plane may not be the same for different channels, which means that a routine must be implemented to find the most focused image in each channel. After testing multiple plugins, I concluded that for my images (512x512 pixels, acquired with a 60X oil immersion objective) the plugin “Microscope Image Focus Quality” was the best performer. However, even with this plugin the results were often a mixed bag: most of the time it was a near miss, identifying a slightly out-of-focus slice as the most focused one; other times it got it completely wrong; only occasionally did it get it right.
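As an illustration of what such a routine has to do, here is a minimal sketch of a variance-based focus measure (the kind of measure many of the other plugins I tested rely on, mentioned again further down). It assumes `stack` is a NumPy array of shape (slices, height, width); the function names are my own, not the plugin's:

```python
import numpy as np

def focus_score(img):
    """Variance of a simple 4-neighbour Laplacian response: higher = sharper edges."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def best_slice(stack):
    """Return the index of the most focused slice in a (z, y, x) stack."""
    return int(np.argmax([focus_score(s) for s in stack]))
```

This is only a baseline for comparison; as described below, measures of this kind did not hold up as well as the neural-network plugin on my data.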
The key thing I came to understand in using this plugin is that my images differ significantly from the ones used to train the neural network model that underlies it. The key differences appear to be (1) overall image size, i.e. number of pixels; and (2) feature size and number of features, which depend on magnification/resolution as well as on the sample and its preparation.
The example below shows one of my images in focus, with some elements showing out-of-focus blurring; the plugin could not identify this slice as the correct focal plane.
If you have tried to use the plugin, you will know that once it is loaded it automatically sets the maximum number of patches it can work with for a given image size. Each patch covers a fixed number of pixels and cannot be made smaller to accommodate a larger number of patches. This can cause the plugin to fail if, for example, the features within each patch are significantly larger or smaller than the objects used to train the model.
To circumvent this problem, it seems that one can interpolate the image to make it larger or smaller, so that the object size in terms of number of pixels is more in line with the size of the features used to train the model (nuclei). So, for the nuclei above I reduced the image from 512x512 to 128x128. In images with much smaller features, I increased the image size to 1024x1024.
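In Fiji this rescaling is a one-liner (Image > Scale...), but the idea can be sketched with NumPy alone, assuming the side lengths divide evenly by the scale factor: shrink by block averaging, enlarge by nearest-neighbour repetition. (Fiji's bilinear/bicubic interpolation will give smoother results; this is just the simplest version of the idea.)

```python
import numpy as np

def downscale(img, factor):
    """Shrink by block averaging, e.g. 512x512 -> 128x128 with factor=4."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale(img, factor):
    """Enlarge by nearest-neighbour repetition, e.g. 512x512 -> 1024x1024 with factor=2."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)
```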
With this approach the plugin now works very well on images where previously it did not. The same approach does not work with other plugins that look predominantly at variances.
Currently, the biggest drawbacks I see in using this plugin are its slow speed and the fact that it does not work on stacks, so one has to write a bit of code to pull the in-focus image out of a stack.
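The shape of that bit of code is simple: score every slice of every channel, keep the best slice per channel, and stack the winners into the composite. Below is a sketch where `quality` stands in for whatever per-slice score you can obtain (the plugin itself is not scriptable this way from plain Python, so in my test I substitute a simple variance measure); `hyperstack` is assumed to be an array of shape (channels, z, y, x):

```python
import numpy as np

def sharpest_per_channel(hyperstack, quality):
    """hyperstack: array of shape (channels, z, y, x).
    quality: callable scoring a single 2-D slice (higher = more focused).
    Returns a (channels, y, x) composite built from each channel's best slice."""
    best = []
    for channel in hyperstack:
        scores = [quality(s) for s in channel]
        best.append(channel[int(np.argmax(scores))])
    return np.stack(best)
```

Note that each channel is scored independently, which is exactly what is needed when the focal plane differs between channels.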
Please drop a comment if you have any suggestions!