Isotropic CARE - PSF questions

Hi all,

I am trying to wrap my head around the Isotropic Reconstruction Example at

Namely, this cell:

anisotropic_transform = anisotropic_distortions(
    subsample = 10.2,
    psf       = np.ones((3,3))/9, # use the actual PSF here
    psf_axes  = 'YX',
)

I have a hard time understanding what the PSF here is supposed to be…

My understanding

The idea is that, under the assumption that the data would look the same whether we observed it in XY, XZ, or any arbitrary slice, we use the well-sampled 2D XY plane to recover what is lost along Z (where the anisotropy is).

So to do that, we take the XY planes, which we use as ground truth, and generate XZ planes by applying an anisotropic distortion to them, so that these serve as raw data.

We define the anisotropy factor, which represents the difference in XY versus Z sampling :ok:
We give it a PSF :no_entry:

I have a hard time understanding the shape of the PSF we have to give it.

  • Is it an actual PSF in XYZ? The example here shows a 2D PSF which I assume is along the XY axes, which will then be ‘stretched’ to compute the simulated raw data?
  • The PSF shape can be rather different in the axial direction, so how should I account for that?
  • How should I normalize the PSF? Sum of intensities equal to 1? Max intensity=1?
  • The example shows a PSF of size 3x3, which seems extremely small. What is a good size to use?
  • Is the PSF we are giving it isotropic in XYZ? (or XZ?) if so, should it be the same voxel (pixel) size as the original image?

This is just the start of my questions. Any help would be appreciated.




This is an interesting example. Here are some thoughts on the PSF-related questions.

I’d obtain a measured (or create a theoretical diffraction-based) 3D PSF, and extract the central XZ slice to get the 2D PSF.
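For instance, a minimal numpy sketch of that extraction (using a synthetic Gaussian as a stand-in for a measured or theoretical 3D PSF — the shapes and sigmas here are illustrative assumptions):

```python
import numpy as np

# Synthetic stand-in for a measured/theoretical 3D PSF, axes ZYX
# (assumption: odd dimensions, PSF centered, axial sigma > lateral sigma).
z, y, x = np.indices((21, 21, 21)) - 10
psf_3d = np.exp(-(x**2 + y**2) / (2 * 2.0**2) - z**2 / (2 * 5.0**2))

# Central XZ slice: fix Y at its center index.
psf_xz = psf_3d[:, psf_3d.shape[1] // 2, :]   # axes: Z, X

# Normalize so the intensities sum to 1.
psf_xz = psf_xz / psf_xz.sum()
```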

In convolution/deconvolution, the PSF should be normalized so its intensities sum to 1, in order to preserve the photon count. I suspect this may be less important in CARE, because the scale factor needed to preserve total intensity can simply be learned from the data.
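To illustrate the point (a hypothetical toy example, not from the thread): a kernel that sums to 1 leaves the mean intensity of a convolved image unchanged:

```python
import numpy as np
from scipy.ndimage import convolve  # assumes scipy is available

rng = np.random.default_rng(0)
img = rng.random((64, 64))

psf = np.ones((5, 5))
psf /= psf.sum()          # normalize: total sum of intensities = 1

# With periodic boundaries, a sum-1 kernel preserves the total
# (and hence the mean) intensity exactly.
blurred = convolve(img, psf, mode='wrap')
```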

In deconvolution, the PSF support should be large enough that the signal has decayed to background level at the edges (this will differ between modalities and systems).

I’m not sure how the transform works internally, i.e. what the order of any convolution and subsampling steps is. The pixel spacings of the image and PSF need to be consistent for the convolution to be valid. Hopefully someone more familiar with the code can explain this step.
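One plausible ordering (to be clear: a hypothetical sketch, not the actual csbdeep implementation) would be to blur the isotropic XZ slice with the extra axial PSF first, then subsample along Z by the anisotropy factor:

```python
import numpy as np
from scipy.ndimage import convolve, zoom  # assumes scipy is available

def simulate_axial_view(xz, psf, subsample):
    """Hypothetical forward model: blur an isotropic XZ slice with the
    additional axial PSF, subsample along Z by the anisotropy factor,
    then resample back to the original grid."""
    blurred = convolve(xz, psf, mode='nearest')
    low = zoom(blurred, (1.0 / subsample, 1.0), order=1)          # subsample Z
    return zoom(low, (xz.shape[0] / low.shape[0], 1.0), order=1)  # upsample back

xz  = np.random.rand(102, 64)                                # toy XZ slice
raw = simulate_axial_view(xz, np.ones((3, 3)) / 9, subsample=10.2)
```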


Hi @oburri,

Yep, your intuitions are correct. In the isotropic reconstruction the lateral (XY) views are assumed to be representative of the isotropic ground truth (i.e. the image with isotropic lateral resolution). The additional resolution loss of the axial (XZ/YZ) views is then dominantly caused by the subsampling and the additional (compared to the isotropic) axial contribution of the PSF.

The latter is what has to be given as a 2D array in the code. We typically compute it by deconvolving the true overall PSF with its isotropic part and taking the central XZ slice (i.e. the additional axial blur that has to be added to an isotropic blur to yield the overall PSF). This is a good approximation if the overall PSF is fairly compact (as e.g. for lightsheet, but not for widefield). This is why, in general, we found the whole pipeline works best when the subsampling effect is the dominating part, as the implicitly 2D forward model is then more accurate. More details can be found in the following paper [1].
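That deconvolution step could be sketched roughly as a regularized Fourier division (an illustrative assumption — a crude Wiener-style division, not necessarily the authors' exact procedure; the toy Gaussian PSFs are made up for the example):

```python
import numpy as np

def extra_axial_psf(psf_overall, psf_iso, eps=1e-3):
    """Estimate the additional axial blur 'extra' such that
    psf_iso convolved with extra approximates psf_overall,
    via a regularized (Wiener-style) Fourier division."""
    F_all = np.fft.rfftn(np.fft.ifftshift(psf_overall))
    F_iso = np.fft.rfftn(np.fft.ifftshift(psf_iso))
    F_extra = F_all * np.conj(F_iso) / (np.abs(F_iso) ** 2 + eps)
    extra = np.fft.fftshift(np.fft.irfftn(F_extra, s=psf_overall.shape))
    extra = np.clip(extra, 0, None)      # suppress small negative ringing
    return extra / extra.sum()           # normalize to sum 1

# Toy Gaussian PSFs on an XZ grid (odd size, centered):
# overall PSF is elongated along Z; isotropic part has the lateral width.
z, x    = np.indices((41, 41)) - 20
overall = np.exp(-z**2 / (2 * 5.0**2) - x**2 / (2 * 2.0**2))
iso     = np.exp(-(z**2 + x**2) / (2 * 2.0**2))
extra   = extra_axial_psf(overall / overall.sum(), iso / iso.sum())
```

The result `extra` is then narrow laterally but elongated axially — exactly the "additional axial blur" described above.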

As the subsampling factor (10.2) in the notebook example dominates the axial resolution loss, we simply set the additional axial PSF (the psf argument) to a small, normalized box kernel (i.e. basically ignoring the additional axial PSF blur).

The PSF shape should be given as XZ, where the last dimension is the axial (i.e. elongated) one.

Yes, the PSF should be normalized to sum to 1 (i.e. preserve the average intensity of the image, as @bnorthan nicely explained).

As noted above, we simply chose a very small PSF since the axial resolution loss in the example was dominated by the subsampling.

Yes, it should use the same pixel size as the lateral (XY) pixel size of the original image.

NB: The PSF deconvolution step described above will at some point make its way into the codebase (it's on the long list of things), so this should hopefully become easier in the future.

Hope that helps a bit!



Dear @bnorthan and @mweigert,

Thank you so much for the extra details, help and time spent on this reply.

I look forward to trying this out and reporting back.