CARE (content-aware image restoration). Data acquisition for training

I’ve just started playing with CARE (running the examples as we speak) and was wondering: do you essentially need to image your sample twice? Once at high resolution for your ground truth (presumably only a small subsection) and then at low resolution? And then do the same for every other sample (because different orientations and different scenes would not work robustly)? Is this correct?

I was surprised how long the training takes even on small data (currently running the 2D denoising example on a 1080Ti GPU), so I am not sure if it is practical to perform this on large 3D data (mouse brain images acquired with light-sheet microscopy). Particularly if each sample needs to be trained “on itself” (note, this is not a critique; I assume all ML approaches have this issue, and I’m just trying to understand, as I am new to ML).


I’m sure someone else will have a better explanation, but here goes …

The general idea of this approach is that you need to generate training data (either experimentally or by simulation) consisting of matched high- and low-quality (SNR/resolution) images. Your network is trained on these image pairs (which will take a long time). You can then apply the trained network to low-quality data alone in future, which is much quicker.

Essentially, training your network will be a fair amount of work, but then using it for all your other samples (if acquired in the same way) will be much easier.
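To make the “by simulation” route concrete: one common trick is to take a clean image and synthesize its low-SNR counterpart with a noise model, giving you a matched training pair without a second acquisition. Here is a minimal NumPy sketch, assuming a toy Poisson-plus-Gaussian noise model (this is an illustration, not the exact model used in the CARE paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean "ground truth" image, values roughly in photon counts.
gt = rng.uniform(50, 500, size=(128, 128))

def simulate_low_snr(clean, photon_fraction=0.05, read_noise=2.0, rng=rng):
    """Simulate a low-SNR counterpart of a clean image: scale the signal
    down (fewer collected photons), then apply shot noise (Poisson) and
    read noise (Gaussian). A toy noise model for illustration only."""
    scaled = clean * photon_fraction
    noisy = rng.poisson(scaled).astype(np.float64)
    noisy += rng.normal(0.0, read_noise, size=noisy.shape)
    return noisy

low = simulate_low_snr(gt)
# (low, gt) is now one matched low/high quality training pair.
print(low.shape, gt.shape)
```

In practice you would generate such pairs for many images and feed them to the network as source/target.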

N.B. if the training on the sample dataset is taking a long time, check that TensorFlow is actually using the GPU by running:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # should list your GPU (TF 2.x)

Thank you for your reply. I’m worried that shadows falling differently between different samples might cause issues (but I have to try and see; worries are not facts!).

Thanks for the GPU-checking tip. I monitor it anyway (nvidia-smi), and it’s definitely being used to full capacity. To be fair, I was using 400 steps in the denoising2D example.

I assume this will rely on having sufficient, representative training data. CARE only processes large images in patches, so as long as both bright and “shadowy” patches are represented in the training set, it should work. If I remember correctly, the original paper showed some stitching artifacts that CARE dealt with nicely.
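Since the network only ever sees patches, the key is that random patches sampled across the whole mosaic will naturally include both bright and “shadowy” regions. A minimal NumPy sketch of matched patch extraction from a registered low/high SNR pair (the function name and sizes are made up for illustration; CSBDeep provides its own patch-generation utilities):

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_patches(low, high, patch_size=64, n_patches=16, rng=rng):
    """Cut matched random patches from a registered low/high SNR image pair.
    Sampling positions uniformly over the image means bright and shadowed
    regions both end up in the training set."""
    assert low.shape == high.shape
    h, w = low.shape
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    X = np.stack([low[y:y + patch_size, x:x + patch_size] for y, x in zip(ys, xs)])
    Y = np.stack([high[y:y + patch_size, x:x + patch_size] for y, x in zip(ys, xs)])
    return X, Y

# Stand-in images; in practice these are your acquired low/high SNR frames.
low = rng.normal(size=(256, 256))
high = rng.normal(size=(256, 256))
X, Y = extract_patches(low, high)
print(X.shape, Y.shape)  # (16, 64, 64) each
```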

Yes, it did handle that nicely. It wasn’t clear to me how they managed to deal with it (that is, how they obtained matched “ground truth” and “low SNR” images containing the “shadowy patches”, as in a mosaic acquisition).


What worked quite conveniently for me was to create a 2-channel mosaic acquisition, where the first channel used a really low exposure time while the second channel was identical but with a much higher exposure.

In the end you have a 2-channel image with low and high SNR, which you can split as needed into separate channels and smaller tiles. This can then be used for CARE.
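The splitting step above is straightforward; a minimal NumPy sketch, assuming the acquisition comes in as a (2, H, W) stack with channel 0 = low exposure and channel 1 = high exposure (shapes and tile size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a 2-channel mosaic acquisition:
# channel 0 = low exposure (low SNR), channel 1 = high exposure (high SNR).
stack = rng.uniform(size=(2, 512, 512))

def split_channels_and_tile(stack, tile=128):
    """Split a (2, H, W) two-channel acquisition into registered
    low/high SNR tiles on a regular grid."""
    low, high = stack[0], stack[1]
    h, w = low.shape
    tiles_low, tiles_high = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles_low.append(low[y:y + tile, x:x + tile])
            tiles_high.append(high[y:y + tile, x:x + tile])
    return np.stack(tiles_low), np.stack(tiles_high)

X, Y = split_channels_and_tile(stack)
print(X.shape, Y.shape)  # (16, 128, 128) each
```

Because both channels come from the same acquisition, the tiles are perfectly registered, which is exactly what the paired training needs.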

Was this your question, or did I misunderstand it?

Sorry, I missed your reply.

So you managed to get rid of the “overlap shadows” even though you trained your network on mosaic data (which presumably still had shadows at the tile edges, even at high exposure)?

Why not use many single positions without any overlap?