CARE Deep Learning Testing

Hi All

I spent some time this morning testing the new CARE deep learning plugin.

Some caveats:

  1. The purpose of my testing is to understand how the deconvolution network for microtubules works, how it deals with artifacts (such as crosstalk and noise), and to gain some experience with the plugin. The tests do not reflect how one should use it for real analysis: for real analysis you must take care that the network was trained on data similar to the data you are processing.

  2. I tested the plugin on all three channels of the Hela-Cells sample image, even though only the green channel contains microtubules. I was interested in how the network would perform on structures it wasn’t trained for.

  3. Even though the green channel of Hela-Cells contains microtubules, it was acquired with a different scope and settings than the training set, so performance will not be optimal.

  4. The plugin currently does not output the “control” image, which contains information about uncertainty. A complete analysis would need to take the uncertainty of the results into account.

To start with, I’ve been testing the “Deconvolution - Microtubules” model and comparing the results to (relatively simple) Gaussian deconvolution.
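(The exact Gaussian deconvolution used for the comparison isn’t shown here. As a rough illustration of what such a baseline does, below is a minimal frequency-domain Wiener deconvolution with a Gaussian PSF in NumPy. The function names and the regularization constant `k` are my own choices, not anything from the plugin.)

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2D Gaussian PSF (same shape as the image), normalized to sum to 1."""
    y, x = np.indices(shape)
    g = np.exp(-((y - shape[0] // 2) ** 2 + (x - shape[1] // 2) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(image, psf, k=1e-3):
    """Wiener deconvolution in the frequency domain.

    k is a constant noise-to-signal term; a larger k suppresses
    noise amplification at the cost of less sharpening.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))       # PSF -> OTF, origin at [0, 0]
    filt = np.conj(otf) / (np.abs(otf) ** 2 + k)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
```

Richardson–Lucy iteration with a Gaussian PSF would be another common “simple” baseline of this kind.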

I used the Hela-Cells sample image cropped using



I’ll present the results for each channel in separate replies.


Red Channel

The red channel contains small emitters. I don’t know for sure what these emitters are. I’ve worked on similar-looking images where the emitters were mitochondria, and researchers were interested in how the mitochondria were travelling through networks. In this case the emitters may be something different; I don’t know. If anyone knows more about the biology of the image, please feel free to comment.

Below is the original, Gaussian deconvolution, and CARE-microtubule-deconvolution for the red channel.

[images: original | Gauss-decon | CARE-decon]

And an ROI from the image showing a cluster of emitters (original, decon, CARE-decon)

[images: original | Gauss-decon | CARE-decon]

In this case the deconvolution does a better job of separating the points, although keep in mind the network wasn’t trained for points.

I asked the authors if they trained on beads or point-like objects, and they indicated they had not; they didn’t think it would work well, as the network needs to learn content. I personally think training on points would be an interesting experiment, as it could have useful applications in deconvolution (if it could learn a non-linear, spatially varying PSF and noise model) or super-resolution (if it could learn to distinguish tightly packed points).


Here are the results for the green channel, which contains microtubules. Left to right: original, Gauss-decon, and CARE-decon.

[images: original | Gauss-decon | CARE-decon]

When we look closely at an ROI (imp.setRoi(186,61,28,41);) there are definitely structural differences between the results.

[images: original | Gauss-decon | CARE-decon]


And finally the blue channel. Again, the network wasn’t trained for this channel, and it is probably finding networks where there are none. This brings up an important question: can the network be trained on images with many different types of structure, so that it generalizes better? And if you trained it on several different types of structure, could you still get good results on a specific structure?

[images]

[images]


I have not read the paper, but the problem as you put it is (to me) interesting. I suppose that the Gaussian deconvolution is attempting to get past the optical limits of the imaging system, while the deep learning system is not; it is doing something else, explaining the captured image based on a training set of given structures.
I guess that training on various types of structures is not what you want, as the original could then be interpreted in various (perhaps equally valid) ways. In the example you gave, you already know where the tubules are, so a big chunk of the job of finding potential tubules is already resolved.

As a final test I created a circle, convolved it with a Gaussian, and added Poisson noise. Obviously the network wasn’t trained on this type of image, but I still wanted to know whether it would randomly find networks in an image that has none. The interesting thing was that the noisier the image, the fewer “false” networks it found. In the noiseless image there were some false networks around the perimeter; when I added noise there were still false networks, but fewer; and when I added even more noise, it found almost nothing. Was it trained on “negative” noise images, to prevent it from finding random networks in noise?
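For anyone who wants to reproduce this kind of probe image, a sketch of the disc → Gaussian blur → Poisson noise pipeline might look like this (assuming a solid disc; the radius, sigma, and photon-count values are my own guesses, not the ones used above):

```python
import numpy as np

def make_probe_image(size=128, radius=30, sigma=2.0, photons=50.0, seed=0):
    """Solid disc, blurred with a Gaussian PSF, then corrupted with Poisson noise.

    `photons` scales the Poisson noise: fewer photons -> a noisier image.
    """
    y, x = np.indices((size, size))
    disc = (((y - size // 2) ** 2 + (x - size // 2) ** 2) <= radius ** 2).astype(float)

    # Centered, normalized Gaussian PSF; blur via multiplication in the frequency domain
    psf = np.exp(-((y - size // 2) ** 2 + (x - size // 2) ** 2) / (2.0 * sigma ** 2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(disc) * np.fft.fft2(np.fft.ifftshift(psf))))

    # Poisson noise at the chosen photon budget (clip tiny negative FFT ripple first)
    rng = np.random.default_rng(seed)
    noisy = rng.poisson(np.clip(blurred, 0.0, None) * photons) / photons
    return disc, blurred, noisy
```

Lowering `photons` (e.g. to 5) gives the “even more noise” case described above.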

Here are the original images

[images]

[images]



Hi Gabriel, if you look closely at some sections of the image, the tubules are not resolved in the original. In the Gauss-deconvolved and CARE-deconvolved images you can see tubules that aren’t obvious in the original, and the two show slightly different structure. In the original we don’t know for sure where the tubules are, and we need to do further image processing to find them.

By the way, I think deep learning will eventually beat conventional deconvolution hands down. This is an important first step, and we have to start looking at the results really closely and give feedback as to whether the structure we see is accurate. Then the developers of deep learning tools can use this feedback to make the next round even better.

[images]


My 2 cents.
Traditional deconvolution and machine learning perhaps address two overlapping problems. The first is the effect of the optics (OTF) and noise; the second is recovering structure that you expect to see (having trained a model for it).
My bet is actually to run traditional deconvolution first, to deal mostly with the first problem: image contrast restoration fixes the optical problem of contrast getting worse as feature size approaches the diffraction limit. Then apply the machine learning approach to address the rest of the problem, the still imperfect representation of the sample in the restored image.



Two things I find unfortunate in your tests.

  1. You do what we explicitly ask people NOT to do. Using the network we’ve put online on either another sample or on data coming from another microscope is simply wrong. It should never be done. Why did you not compare on the example data we supply?
  2. You test only one of the many applications we show CARE can perform.

While the second point is only misleading for people who did not have a look at the paper, the first point is a sin! Not because it enables you to drive a faulty comparison home with a result you obviously prefer, but because you suggest to others that it is ok to run a network on random data. It is not and should never be done!!!

For these reasons I find your efforts not only superfluous but counterproductive and dangerous.




It appears as if the original poster is not familiar with how learning networks function.

This is the reason why I think machine learning is dangerous* if it isn’t used by people who know exactly what they are doing and what they can expect.
*Think of medical applications in which decisions are made whether you’re seriously ill or not …

In general I think that with mathematically founded approaches you’re on the safe side.




Now that sounds like a voice of reason.
Yes, the power it brings comes with responsibility.
We went through quite some effort to enable additional readouts that can help to see which parts of CARE reconstructions are uncertain.
This we did for precisely the reason you point out.

Hi @fjug and @anon96376101

First off, a bit about my background. I did my master’s degree work on neural networks many years ago, so I am very aware of the issues involved when training and applying neural networks.

I explicitly acknowledged that my testing was suboptimal in my original comment.

The main reason I tested on random images was that I wanted to know whether the system would find microtubules where there are none. In real applications, it may be impossible to prevent artifacts, crosstalk, and other problems from appearing in the application set that were not in the training set, even when training and testing sets are tightly controlled. Thus it is important to understand how the network performs on random structure.

I did not fully understand that the microtubule deconvolution would not be valid for microtubule images that are not from the same scope. I suspected it might not work, and I acknowledged that in my original post.

I understand now that the networks are not to be used on images from other microscopes, even if the images are similar.

Anyway, I am sorry if I insulted anyone; it was absolutely not my intention. I was, and still am, very excited about this work. I have projects for which it could potentially be useful, and I will continue to follow the progress.


Hi @bnorthan,
Interesting test you are running, thank you for sharing!
Can you confirm that your circle is solid (a disc) and not hollow (a ring)?
It looks solid, but with the right PSF to radius ratio everything will…

Hi @bnorthan,
my comments were never intended to be personal. As you know, I am very well aware of your background and how important your contributions to our community are.
I think it would be great if your posts would make it more clear that CARE is trained on the combination of sample and microscope. Changing one of the two requires retraining. This is important to prevent people from doing the wrong thing.
The question about the effects of CARE when applied to unfit data is indeed very interesting. Believe me, we played and tested a lot with such scenarios. I actually think it would be great to see someone make a bunch of tests in this regard. If so, I would wish that the corresponding paper or forum post would make it crystal clear that the aim is to explore boundaries beyond the intended way of using CARE.
I think your post is not optimal because it combines this way of pushing beyond the limits with a comparison to deconvolution, a bit of a tough combination for a short and not very conclusive post.

With my words in mind, if you read your posts again, maybe you will see what I’m talking about, and maybe you will come to the conclusion that adding a few words of caution might actually prevent people from following your example and using CARE on their own data without the required training.



We trained on data with MUCH higher resolution. If you take this into account, it becomes clear why CARE is not putting microtubules in the same places.
If your prior says something about the known size of microtubules (in pixels), you would not explain the image with structures that are way too small for the learned prior.
Do you see now why CARE depends very much on the microscopy setup the data comes from?


Hi @fjug

The wiki page for Deconvolution (Microtubules) indicates you can use your own images. It may be a good idea to edit this section if users really shouldn’t be using their own data. If it is OK to use your own data, provided it was acquired with a scope and settings similar to the training data, the page should specify the type of scope and settings (wavelength, NA, spacing, etc.) that the training data was acquired with.

Download exemplary image data or use your own images for the following steps.
Start Fiji and open an image by choosing a file on your computer.
For this model, the image must be a 2D grayscale image or a timeseries of 2D grayscale images.

Also I edited my original post so the caveats are right at the top, and I added some additional information. Please let me know if I should reword or add anything.


Yes, it is solid (a disc). The original is shown below.

I emphasize that the network was not trained for this type of shape, PSF or noise model. The purpose of my test was merely to understand how the network would deal with structure it had not seen.

Also, there is a “control” image that reflects the uncertainty of the result; the plugin does not output it yet. When the network encounters artifacts that were not in the training set, the uncertainty may be reflected in the control image.


Very good suggestions. Will make changes as soon as I have a computer in front of me…
