DeconvolutionLab2: Output range/suitable postprocessing?

Hi, I’m taking my first steps in 3D deconvolution with DeconvolutionLab2. The data I work with are reconstructed holographic microscopy volumes. Objects tend to be smeared along the z-axis, and I am trying to get rid of this effect (see e.g. this paper for the principle).

I created a PSF myself by propagating a phantom “cell”, creating an artificial hologram, and reconstructing it as a 3D volume with the same dimensions as the input image. I tried both the Richardson-Lucy and the Regularized Inverse Filter algorithms. The results look very similar, but I’m not quite sure how to handle them, or whether they are supposed to look like that. The output varies significantly in brightness and contrast from slice to slice, and the value range is roughly -5 to 20 (the input data type was uint8). I’ve attached some slices of the output volume; any advice on how to interpret this is much appreciated :slight_smile:

Hi @speedymcs

Typically, DeconvolutionLab2 and other deconvolution plugins are used with images from fluorescence microscopy, where the images are formed by detecting photon emissions with a camera. So ideally there are no negative values in the input or output. (Edit: there can be negative values, depending on which algorithm is used, but these are usually considered artifacts.)

It looks like you have negative values in your output? Do you have negative values in the input as well? What about the PSF? (I am not knowledgeable about holography, so I am not sure what would be typical.)

So I think there are two issues. First, there is a mathematical/physics issue: is the forward model of holography approximately a linear, spatially invariant model? In other words, is the image formed by convolution? If not, the deconvolution results may not be valid.
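One quick sanity check here: if the forward model really is a convolution, then simulating it yourself, by convolving a known phantom with your PSF and comparing against what your holographic propagation produces, should give roughly the same image. A minimal sketch in Python with SciPy (the phantom and PSF below are just placeholders, not your data):

```python
import numpy as np
from scipy.signal import fftconvolve

# Placeholder data: a single point object and a random, unit-sum PSF.
rng = np.random.default_rng(0)
obj = np.zeros((16, 32, 32))
obj[8, 16, 16] = 1.0                  # a point object near the center
psf = rng.random((5, 7, 7))
psf /= psf.sum()                      # normalize the PSF to unit sum

# Linear shift-invariant forward model: image = object (*) PSF.
image = fftconvolve(obj, psf, mode="same")

# A unit-sum PSF conserves total intensity for an interior point object.
print(np.isclose(image.sum(), obj.sum()))  # True
```

If the simulated image and the holographically propagated one disagree badly, the LSI assumption (and hence the deconvolution) is questionable.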

The second issue is the display of images with negative values. These can make the image hard to interpret, as the display range keeps shifting and the background can appear to shift (in your example it goes from gray, to white, to black). The background may actually be constant; it is the lowest and highest values that are shifting, which changes the display range.

Maybe you can try casting the image to 16-bit and looking at it with a constant display range.
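Outside of ImageJ, this conversion is a few lines of NumPy. A sketch (the function name `to_uint16` is just mine) that scales by the global min/max of the volume, so every slice shares one display mapping instead of being autoscaled individually:

```python
import numpy as np

def to_uint16(volume):
    """Shift and scale a float volume into the uint16 range using the
    global min/max, so the whole stack shares one display mapping."""
    vol = np.asarray(volume, dtype=np.float64)
    lo, hi = vol.min(), vol.max()      # global, not per-slice
    scaled = (vol - lo) / (hi - lo)    # map [lo, hi] -> [0, 1]
    return (scaled * 65535).astype(np.uint16)

# Example: a small fake volume with negative values, like the decon output.
fake = np.linspace(-14.011, 25.288, 2 * 3 * 3).reshape(2, 3, 3)
converted = to_uint16(fake)
print(converted.min(), converted.max())  # 0 65535
```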

Also, it is always useful to create an axial view of deconvolution results, so you can see what is happening along the depth dimension: go to Image->Stacks->Reslice. It would be interesting to see axial (xz or yz) slice views of the original and deconvolved images. If you have time, maybe you can post them here; it would give us better insight into what is happening in the third dimension.
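For what it’s worth, if you prefer scripting, the reslice amounts to fixed-index slicing of the volume array. A small NumPy sketch, assuming the stack is ordered (z, y, x) (the axis order depends on how your TIFF was saved):

```python
import numpy as np

# A toy stack with axes ordered (z, y, x).
stack = np.random.rand(32, 64, 64)

# An xz view at a fixed y row: take that row from every slice.
y = 40
xz_view = stack[:, y, :]   # shape (z, x)

# A yz view at a fixed x column.
x = 10
yz_view = stack[:, :, x]   # shape (z, y)

print(xz_view.shape, yz_view.shape)  # (32, 64) (32, 64)
```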

Brian

Hi @bnorthan, thanks for your reply!

The principle of using deconvolution in holography is well-founded in my view; there are quite a few papers out there describing its use, e.g. the one I linked to in my first post. I’m pretty sure the PSF I used is the main thing that needs to be improved after my first try. I will try working with reconstructed amplitude images instead of phase images next, as I think artificial phase jumps were created in the PSF. I had not thought about those: when reconstructing the hologram of a real-world phase object, these phase jumps are corrected by unwrapping, but I doubt this works correctly in my case.

The value range I got is actually -14.011 to 25.288. In the meantime I managed to normalize each slice in Python after saving them as .tif files. The stack actually looks pretty decent and homogeneous now, though of course it’s still odd that the output looked like that, given that negative values would usually be considered artifacts. Both the PSF and the input image I fed to DeconvolutionLab2 were unsigned 8-bit images, so there were no negative values anywhere.
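For anyone finding this later, the per-slice normalization I did amounts to a few lines of NumPy. A sketch along those lines (note it discards the relative brightness between slices, so it’s really only suitable for display, not for quantitative comparison):

```python
import numpy as np

def normalize_slices(volume):
    """Rescale each z-slice of a (z, y, x) volume independently to [0, 1].
    Evens out slice-to-slice brightness, but throws away the relative
    intensities between slices -- use for display only."""
    vol = np.asarray(volume, dtype=np.float64)
    lo = vol.min(axis=(1, 2), keepdims=True)
    hi = vol.max(axis=(1, 2), keepdims=True)
    return (vol - lo) / (hi - lo)

# Example: a stack with negative values, like the deconvolution output.
rng = np.random.default_rng(1)
stack = rng.normal(loc=0.0, scale=5.0, size=(8, 16, 16))
norm = normalize_slices(stack)
print(norm.min(), norm.max())  # 0.0 1.0
```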

Reslicing was great advice! I’ve attached some screenshots to illustrate my task.

The first one shows the input image, a holographic reconstruction of a couple of cells in solution, with quite strong artifacts. I circled a cell which is almost in focus in the x-y view. In the resliced representation you can see how the cell is smeared along the z-axis.

The second one shows the same image deconvolved. It’s quite blurry, but I’d say you can see some effect: in the resliced view, the cell image has a much smaller extent along the z-axis. I’m not so sure about its exact location, though.

The last one is the PSF I used (thresholded).