Richardson-Lucy deconvolution for 1D spectrum data

The Richardson-Lucy deconvolution is a very powerful tool for deconvolving an image with a known kernel. However, my problem is with an experimental 1D signal. What I do is expand my data into a square 2D image by copying it along one axis, and I do the same with the kernel. Luckily, I get a good-looking result. My question is whether the Richardson-Lucy deconvolution used this way can introduce other kinds of issues for a 1D spectrum, because what I want is simply the deconvolution of the 1D data. Thanks very much!

Looking at the Richardson-Lucy code, there is nothing there that is 2D-specific. You should be able to just pass in a 1D “image” directly, no need to replicate it…

And yes, I think it should work.
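
For reference, a minimal sketch of what I mean (assuming scikit-image's restoration module; the toy signal, PSF width and iteration count below are made up):

import numpy as np
from skimage import restoration

# toy 1D signal and a normalized, sharply peaked PSF (placeholder values)
x = np.linspace(-5, 5, 500)
signal = 1.0 + np.exp(-x**2)
psf = np.exp(-(x / 0.2)**2)
psf /= psf.sum()

# pass the 1D arrays directly; the third positional argument is the iteration
# count (called `iterations` in older scikit-image releases, `num_iter` in newer ones)
deconv = restoration.richardson_lucy(signal, psf, 30, clip=False)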

Thank you for the response! I had indeed tried using the 1D array as input to the Richardson-Lucy function, but the result seems wrong: it changes some of the features of my original 1D data. Expanding the 1D array into a 2D image, on the other hand, works well. What I am worried about is whether the ‘expanding’ method introduces some side effect that I do not want.

Well, it’s unclear to me why the results should be different. Very unclear. :joy: Could you maybe provide some sample data, and inputs and outputs for both methods?

Actually, it is different.
For example, we can generate raw data with x = np.linspace(-6,3,1000) and y = np.sin(x). The deconvolution kernel can be a sharply peaked Gaussian function.
If I use the 1D array as input, the result is below:
[plot: deconvolution result with the 1D input]
and if I use the expanded 2D array, the result looks more reasonable, as below:
[plot: deconvolution result with the expanded 2D input]

This almost looks like a data-type related overflow; please do provide some example code so we can look into it further. Thank you!

Hi @BIN_HU

Do you know why there are 1000 points in the 1d result but 2000 in the 2d result? It seems to me this could be a clue as to why they are different.

Also, how did you generate your forward signal? Richardson-Lucy is used to solve for the original signal when the input signal is the result of the original convolved with a point spread function. To do a simulation:

a) create simulated original signal of length n
b) create point spread function of length m
c) convolve original signal with point spread function (optionally add noise)
d) deconvolve result of c) with point spread function and confirm that you get the original back.

Note, usually you cannot recover the original exactly because some information is lost when doing the convolution.

Some care has to be taken depending on the sizes of n and m. Often the signal is extended at the edges to avoid artifacts. This depends on the library: some implementations of RL extend under the hood, while others require the caller to extend the signal as a preprocessing step.
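
To make the recipe above concrete, here is a rough sketch of steps a) through d), including an edge extension before deconvolution. It is only an illustration (the signal shape, PSF width, noise level and padding amount are arbitrary choices), not a prescription:

import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

# a) simulated original signal of length n (kept strictly positive for RL)
n = 1000
x = np.linspace(-10, 10, n)
original = 1.0 + np.exp(-x**2)

# b) point spread function of length m, normalized to sum to 1
m = 201
t = np.linspace(-2, 2, m)
psf = np.exp(-(t / 0.2)**2)
psf /= psf.sum()

# c) forward model: convolve the original with the PSF and add a little noise
measured = fftconvolve(original, psf, mode='same')
measured += np.random.normal(0, 1e-3, size=n)

# extend the signal at the edges to reduce boundary artifacts
pad = m // 2
measured_padded = np.pad(measured, pad, mode='edge')

# d) deconvolve, crop the padding back off, and compare with the original
recovered = restoration.richardson_lucy(measured_padded, psf, 50, clip=False)
recovered = recovered[pad:pad + n]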

As others mentioned, it will be easier to give useful advice if you provide a code example showing the following steps:

a) how you generate the original
b) how you generate the PSF
c) how you did the preprocessing of the original and the convolution with the PSF (to create the simulated signal)
d) how you call Richardson-Lucy to deconvolve the above

Brian

Thank you all for the responses; below is my code:

import numpy as np
import matplotlib.pyplot as plt
from skimage import restoration

# gap-like spectrum with broadening Gamma and gap Delta
Gamma = 0.1
Delta = 1
def rho(E):
    return np.real(np.abs(E - 1j*Gamma) / np.sqrt((E - 1j*Gamma)**2 - Delta**2))

x = np.linspace(-10, 10, 1000)
raw_data = rho(x)

# broadening kernel: sharp peak from the derivative of the Fermi-Dirac function
def dekernel(x):
    B = 0.04
    return 0.25/B / (np.cosh(x/(2.0*B))**2)

kernel = dekernel(np.linspace(-10, 10, 1000))

# expand the 1D arrays into square 2D images by repeating along one axis
rdimg = np.tile(raw_data, (raw_data.shape[0], 1))
kernelimg = np.tile(kernel, (kernel.shape[0], 1))

# deconvolve with the 1D inputs and with the expanded 2D inputs
# (note: newer scikit-image releases name this parameter num_iter)
deconv1d = restoration.richardson_lucy(raw_data, kernel, iterations=200)
deconv2d = restoration.richardson_lucy(rdimg, kernelimg, iterations=200)

plt.figure()
plt.subplot(121)
plt.plot(deconv1d)
plt.title('input 1d array')
plt.subplot(122)
plt.plot(deconv2d[0, :])
plt.title('input 2d array')
plt.show()

Here, rho is a function that produces a gap-like spectrum, and the dekernel function is a sharp peak given by the derivative of the Fermi-Dirac function in physics. In this context, I want to remove the influence of the dekernel function by deconvolving my rho function with it. With the 1D and the 2D array as input, the results are shown below:

[plot: left panel ‘input 1d array’, right panel ‘input 2d array’]
For subplot 1, the result does not seem to match the data, especially around the shoulder of my rho function. The 2D array, however, works reasonably.

In the former simulation, the 2000 points were obtained by adding padding on both sides (500 points each) of my original data. The padding values are set equal to raw_data[0] and raw_data[-1].
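
For reference, the padding I describe can be written with np.pad (a sketch; mode='edge' repeats raw_data[0] and raw_data[-1]):

# pad 500 samples on each side with the edge values (length 1000 -> 2000)
raw_padded = np.pad(raw_data, 500, mode='edge')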

Thank you for your response! Please take a look at my code above.

It has something to do with normalization. If you use clip=False, you’ll see that the 1D and 2D give back very similar shapes, but the 2D shape has a tiny range. I suspect this is because sum(kernel) differs by several orders of magnitude. However, even when I normalized both kernels and images, I still saw different magnitude outputs (but much closer).
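
For concreteness, the kernel-normalization part looks roughly like this (a sketch against the code above; normalizing by the array sum is just one choice):

# normalize the kernels so each sums to 1 before deconvolving
kernel_n = kernel / kernel.sum()
kernelimg_n = kernelimg / kernelimg.sum()

# clip=False stops the output from being clipped to the range [-1, 1]
deconv1d = restoration.richardson_lucy(raw_data, kernel_n, 200, clip=False)
deconv2d = restoration.richardson_lucy(rdimg, kernelimg_n, 200, clip=False)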

Thanks for the suggestion to use clip=False. Do you think increasing the number of iterations could improve this? I am trying that now.

Meanwhile, the outputs are not smooth, which matters especially because real experimental data always contain noise. Can you give some suggestions for making the results smoother? Thank you a lot!

I don’t think the number of iterations matters so much; I found the same results with 10 iterations.

Unfortunately, I don’t know the algorithm well, so I cannot suggest a way to smoothen the output.

If you are able to search for your answer through regression, then you can usually specify a penalty on the outcome quite easily.
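
In case it helps, here is a rough sketch of what "regression with a penalty" could look like for this problem, using the raw_data and kernel from the code above. The convolution is written as a matrix and the deconvolution solved as penalized least squares; the first-difference penalty and the weight lam are arbitrary choices of mine:

import numpy as np
from scipy.linalg import convolution_matrix

# forward operator A so that A @ original_spectrum approximates the measured spectrum
A = convolution_matrix(kernel / kernel.sum(), len(raw_data), mode='same')

# penalized least squares: minimize ||A s - y||^2 + lam * ||D s||^2
D = np.diff(np.eye(len(raw_data)), axis=0)   # first-difference operator (penalizes roughness)
lam = 1e-3                                   # penalty weight, to be tuned
lhs = A.T @ A + lam * (D.T @ D)
rhs = A.T @ raw_data
estimate = np.linalg.solve(lhs, rhs)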

Thank you for the advice, I’ll try the regression method. Thank you again!