I am trying to train a CARE model to restore lattice light-sheet microscopy images of microtubule structures. As ground truth, I am generating synthetic images following the approach illustrated in Figure 4 of the original CARE paper and described on page 50 of the supplementary information.
Briefly, I generate 3D images with pixel-wide filamentous structures (based on microtubules segmented from real microscopy images), add low-frequency Perlin noise (to simulate auto-fluorescence), convolve the result with a measured PSF, and apply Poisson and Gaussian noise.
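For concreteness, here is a minimal numpy/scipy sketch of that pipeline. Everything in it is a stand-in: a random walk instead of filaments traced from segmented microtubules, heavily smoothed white noise in place of real Perlin noise, an anisotropic Gaussian in place of the measured PSF, and arbitrary photon/read-noise levels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# 1) Ground truth: pixel-wide filaments via a persistent random walk
#    (placeholder for filaments traced from segmented microtubules)
gt = np.zeros((32, 64, 64), dtype=np.float32)
for _ in range(20):
    pos = rng.uniform([0, 0, 0], gt.shape)
    d = rng.normal(size=3); d /= np.linalg.norm(d)
    for _ in range(200):
        d += 0.1 * rng.normal(size=3); d /= np.linalg.norm(d)
        pos = np.clip(pos + d, 0, np.array(gt.shape) - 1)
        z, y, x = pos.astype(int)
        gt[z, y, x] = 1.0

# 2) Low-frequency background (stand-in for Perlin noise /
#    autofluorescence): heavily smoothed white noise
background = gaussian_filter(rng.random(gt.shape).astype(np.float32), sigma=8)
img = gt + 0.2 * background / background.max()

# 3) Blur with the PSF -- anisotropic Gaussian here as a placeholder
#    for the measured lattice light-sheet PSF (sigma in z, y, x)
img = gaussian_filter(img, sigma=(2.0, 1.0, 1.0))

# 4) Poisson shot noise at a chosen photon budget, then Gaussian read noise
photons = 50.0
noisy = rng.poisson(img * photons).astype(np.float32)
noisy += rng.normal(0.0, 1.0, noisy.shape).astype(np.float32)
```

In the real pipeline the Gaussian in step 3 would be replaced by a convolution with the measured PSF, and the noise levels in step 4 calibrated to the camera.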
The problem is that the resulting images look considerably less blurred than the real microscopy images.
My first question is: are there any other sources of blur specific to lattice light-sheet microscopy, not captured by the measured PSF, that I might be missing here?
The second question is: does anyone have experience training CARE for deconvolution on synthetic data and successfully applying it to images that are subsequently used for quantification (not just illustration)? What are the potential pitfalls of this approach? Is it worth pursuing, or should I use high-quality real images as ground truth instead?
On a related note: after I have restored my images (using synthetic or real data for training CARE), how do I know that my restoration is good enough? I am looking for a method to quantify restoration artifacts, similar to the NanoJ-SQUIRREL Fiji plugin, but in 3D.
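To make the question concrete, here is roughly what I imagine a SQUIRREL-style check would look like in 3D (my own sketch, not code from the plugin): blur the restored volume back to the raw resolution, linearly match intensities, and compare voxel-wise. The Gaussian stands in for the resolution-scaling function, which in practice should come from the measured PSF.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def resolution_scaled_error(restored, raw, psf_sigma):
    """Blur `restored` back to raw resolution, fit raw ~ a*reblurred + b,
    and return (global RMSE, Pearson correlation, per-voxel error map)."""
    reblurred = gaussian_filter(restored.astype(np.float64), sigma=psf_sigma)
    # least-squares intensity match between reblurred and raw
    A = np.stack([reblurred.ravel(), np.ones(reblurred.size)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(
        A, raw.ravel().astype(np.float64), rcond=None)
    fitted = alpha * reblurred + beta
    error_map = raw - fitted            # where the restoration disagrees
    rse = np.sqrt(np.mean(error_map ** 2))
    rsp = np.corrcoef(fitted.ravel(), raw.ravel())[0, 1]
    return rse, rsp, error_map
```

The error map would then highlight regions where the restoration is inconsistent with the raw data, analogous to the SQUIRREL error map but over a volume. I would be glad to hear whether anyone has validated something like this, or uses a better approach.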
Thanks in advance for any ideas, thoughts, suggestions!