CARE 2D Denoising Artifacts

Hi all,

We’ve been playing around with CARE 2D denoising, building protocols for acquisition and training for all our microscopes. We’ve managed to train a model with very crappy input data
CARE001 - R03-C04-F03.tif (8.9 MB)

And ground truth
CARE001 - R03-C04-F03.tif (8.9 MB)

The training is done on patches of 256x256 pixels (which include background, otherwise the model ends up ‘dreaming up’ nuclei) and using the following CARE setup:
[screenshot: CARE training configuration]
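To make the patch setup concrete, here is a minimal numpy sketch of the kind of uniform paired-patch sampling meant above — patches are drawn anywhere in the image, so background is included. The function name and sizes are illustrative only; CSBDeep ships its own data generation (`create_patches`), which is what you'd normally use.

```python
import numpy as np

# Illustrative sketch only -- CSBDeep provides create_patches in csbdeep.data;
# this just shows uniform sampling that deliberately keeps background patches.
def sample_paired_patches(low, gt, patch=256, n=64, seed=0):
    """Sample n aligned patch pairs from a noisy image and its ground truth.

    Patches are drawn uniformly over the image, so background regions are
    included on purpose -- the network should also learn what 'empty' looks like.
    """
    rng = np.random.default_rng(seed)
    h, w = low.shape
    ys = rng.integers(0, h - patch + 1, size=n)
    xs = rng.integers(0, w - patch + 1, size=n)
    lo = np.stack([low[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])
    hi = np.stack([gt[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])
    return lo, hi

# Tiny demo on dummy data
low = np.zeros((512, 512), dtype=np.float32)
gt = np.zeros((512, 512), dtype=np.float32)
lo, hi = sample_paired_patches(low, gt, patch=256, n=8)
```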

The training completes rather well with these settings

However, running the FIJI plugin to apply the network yields the following:

Notice the “Holes” in the nuclei. Below is the ground truth


I would be glad for anyone giving me some insight or hints as to what I could do to fix this issue or where to look.

Just FYI, on these images, 256x256 patches look like this
[images: two example 256x256 patches]

Long post, and thank you all for your time!


Hi @oburri,

Thank you for trying things out! I only saw your post today, sorry about that.

I spent a while debugging your issue and think the problem is due to the combination of two factors:

  1. very noisy input images, where the interior of nuclei sort of looks “the same” as a background region
  2. quite large nuclei that are bigger than the receptive field (“viewing radius”) of the neural network

Hence, the CARE network thinks the inside of some of the larger nuclei is background (just with a higher average value) and predicts a restored intensity close to 0.

I was able to overcome this problem by downsampling your images and using a CARE network with an increased receptive field (setting unet_n_depth = 3). The solution is not ideal; it would be best to customize the neural network architecture to have a larger receptive field without substantially increasing the number of parameters. Unfortunately, I don’t have a better recommendation for you at the moment.

[image: restored result after downsampling and increasing unet_n_depth]
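To get a feeling for the numbers, here is a rough receptive-field estimate for a U-Net, assuming two convolutions per resolution level and 2x2 max-pooling (which I believe matches CSBDeep's defaults, but treat it as an approximation of the theoretical receptive field, not the exact value for the real network):

```python
def unet_receptive_field(depth=2, kern=3, convs_per_level=2, pool=2):
    """Approximate theoretical receptive field (in pixels) of a U-Net.

    Tracks the receptive field r and the pixel 'jump' j between adjacent
    outputs: a k-sized conv adds (k-1)*j, pooling adds (pool-1)*j and
    multiplies j by the pooling factor.
    """
    r, j = 1, 1
    for _ in range(depth):               # encoder: convs, then pooling
        for _ in range(convs_per_level):
            r += (kern - 1) * j
        r += (pool - 1) * j
        j *= pool
    for _ in range(convs_per_level):     # bottleneck convs
        r += (kern - 1) * j
    for _ in range(depth):               # decoder: upsample, then convs
        j //= pool
        for _ in range(convs_per_level):
            r += (kern - 1) * j
    return r

print(unet_receptive_field(depth=2, kern=3))  # default-ish settings
print(unet_receptive_field(depth=3, kern=3))  # one extra level
```

Under these assumptions, going from depth 2 to depth 3 more than doubles the receptive field (44 to 96 px here), which is why large nuclei need either a deeper network or downsampled images.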

(which include background, otherwise the model ends up ‘dreaming up’ nuclei)

Can you give me more details? I used the standard data generation (as used in all the example notebooks) without such problems.

using the following CARE setup:

  • Are you using a probabilistic model on purpose?
  • When using large training patches of 256x256 pixels, reducing train_batch_size to 16 or 8 should be fine and speed up training.
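For reference, a configuration along these lines might look as follows — a sketch with suggested values, not a drop-in replacement for your exact setup:

```python
from csbdeep.models import Config

# Sketch of a CARE config along the lines discussed (values are suggestions):
config = Config(
    'YX', n_channel_in=1, n_channel_out=1,
    probabilistic=False,   # use True only if you need per-pixel uncertainty
    unet_n_depth=3,        # larger receptive field, helps with large nuclei
    train_batch_size=8,    # smaller batches are fine for 256x256 patches
)
```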

Best,
Uwe


Hi @uschmidt83,

thanks a lot for your answer!

very noisy input images

we really wanted to push CARE :wink:
(the input images are acquired at 1% illumination with 5 ms exposure, vs. 10% and 150 ms for the ground truth)

quite large nuclei that are bigger than the receptive field (“viewing radius”) of the neural network

We changed unet_kern_size to 5 (if I recall correctly, the default is 3). We also tested 7, but it becomes soooo slow!
And I think we also tested unet_n_depth = 3, but on the non-downscaled images (which also slowed down processing a lot).
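In case anyone else wants to try the downsampling route before training, a simple 2x2 block-average in numpy does the trick (Fiji's scaling command or skimage would work just as well; this is only a sketch):

```python
import numpy as np

def downsample2x(img):
    """2x2 block-average downsampling; crops odd edges first.

    Shrinking the image makes nuclei smaller relative to the network's
    receptive field, so their interiors are less likely to be mistaken
    for background.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```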

Are you using a probabilistic model on purpose?

I can’t remember why we changed it to probabilistic, @oburri will know :smiley:

Thank you again for the feedback,

Romain
