Questions about CSBDeep ground truth and input

Hi, I am trying to apply CSBDeep reconstructions to enhance live-cell imaging under extremely low-light conditions (the cells must survive for 48 h).

I have 3 questions.
The images are low light, 1024x1024, 220 nm per pixel, 23-plane z-stacks at 0.5 µm spacing, taken with a gained EMCCD.

1. When should I apply deconvolution on top of this?

We usually apply deconvolution since these are 3D stacks (Richardson-Lucy with the Nikon package or flowdec).
Should I do
a. raw input (low light) → ground truth (high quality, more light), both before deconvolution, to train, and then deconvolve the CSBDeep result?
b. deconvolved raw input (low light) → deconvolved high-quality ground truth to train, and then just apply CSBDeep to the deconvolved low-light data?
c. raw input (low light) → deconvolved ground truth to train, and then get both enhancement and "deconvolution" through the CSBDeep prediction?

2. Different cameras with different pixel sizes
We acquire the live cells with the EMCCD at a 0.220 µm pixel size, but we also have a back-illuminated sCMOS that does 110 nm pixels (2048x2048).

Can I use the sCMOS for ground truth?
a. Should I use the [csbdeep.data.Resizer] class (Model application — CSBDeep 0.6.1 documentation) on the EMCCD data at 220 nm per pixel to make it 2048x2048?
b. Can I capture the ground-truth z-stack with 0.3 µm spacing and resize it the same way?

3. Is more data diversity better or worse?
We image viruses before fusion (high signal intensity), after fusion, etc. The viruses should be visible for 8 hours, and different intensities are seen at the different steps/time points.

a. Should I reconstruct 0 h to 15 min with ground truths from that window alone?
b. Should I mix images from the whole 48 hours (when some do not have viruses at all)?

Thanks so much for these great tools. I would not have dreamed this would be possible 5 years ago!

Hi @jmamede

sorry for the late reply.

As deconvolution typically assumes a certain noise model (which would be changed by any prior denoising), I would go directly for c).
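
A minimal sketch of how the training pairs for c) could be set up with the CSBDeep data-generation helpers (the folder names here are placeholders; the source stacks stay raw, the targets are the deconvolved high-SNR stacks):

```python
# Sketch only: pair raw low-light stacks with deconvolved high-SNR ground truth.
# 'data/low_raw' and 'data/GT_decon' are placeholder folder names.
from csbdeep.data import RawData, create_patches

raw_data = RawData.from_folder(
    basepath='data',
    source_dirs=['low_raw'],   # raw, noisy low-light input (no deconvolution)
    target_dir='GT_decon',     # deconvolved, high-SNR ground truth
    axes='ZYX',
)

# extract 3D training patches and save them for training a CARE model later
X, Y, XY_axes = create_patches(
    raw_data,
    patch_size=(16, 64, 64),
    n_patches_per_image=512,
    save_file='data/my_training_data.npz',
)
```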

Yes, you could do both (a and b).

Yes, you should have as many different timepoints as possible present in the training data (b).

Hope that helps,
M


Sorry for jumping in here, but I was surprised by the answer and I just want to be sure I understand the recommendations given here. First, @jmamede, do I correctly understand that you want to train with data from an sCMOS camera and then predict on data from an EMCCD? @mweigert, if that's the case, do you really recommend doing that? My understanding (also from the docs) was that one should use the same acquisition setup for training and prediction. Second, it was also my understanding that some methods from CSBDeep like N2V are not well suited to sCMOS data, which have a lot of structured noise. Is that not true anymore?
Thanks,
Guillaume

You are correct that the acquisition modalities (camera, etc.) used for producing the input images have to be the same for training and prediction. Maybe I misunderstood @jmamede there, as I was assuming that there are EMCCD 220 nm low-SNR input images and corresponding sCMOS 110 nm high-SNR ground-truth images (of the same regions) available. If that is not the case, then indeed one should not use a model trained purely on sCMOS images on EMCCD images. (The second part b was referring to the upsampling use case here.)
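
As a rough illustration of that pairing (an assumption about the data layout, not tested code): one would first bring the EMCCD input onto the same voxel grid as the sCMOS ground truth, e.g. by simple interpolation, before registering the two cameras and building the training pairs:

```python
# Sketch only: resample the EMCCD stack onto the sCMOS voxel grid so both
# stacks can be paired. File paths and shapes are assumptions.
import numpy as np
from scipy.ndimage import zoom
from tifffile import imread, imwrite

low = imread('data/low_emccd/stack.tif')   # e.g. (23, 1024, 1024), 220 nm/px, 0.5 um steps
gt  = imread('data/gt_scmos/stack.tif')    # e.g. (Z, 2048, 2048), 110 nm/px

# upsample laterally by 2x (and axially if the GT was taken at 0.3 um steps)
# so both stacks share one voxel grid; linear interpolation here
factors = tuple(g / l for g, l in zip(gt.shape, low.shape))
low_up = zoom(low.astype(np.float32), factors, order=1)

imwrite('data/low_upsampled/stack.tif', low_up)
```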

Yes, AFAIR that still holds true (note that there are more recent approaches like structn2v and ppn2v that might work better in this case).


Thanks for the replies.

About the first point:

I tried a, b, and c. Option c indeed does a wonderful job of deconvolving and predicting/enhancing at the same time. It's slower than flowdec deconvolution, but it's really worth it.
I also tried deconvolved low → deconvolved as ground truth and saw no improvement compared to option c.

  • So far I have only used EMCCD high-gain, low-light input and trained with higher-light, lower-gain ground truth at basically the same pixel size (and the same z-step).

I will try the EMCCD → sCMOS upscaling for training when I have a bit more time for it. Basically, I need to find a good way to register the cameras; I found some Python code from @talleylambert that I'll try for that.
I can do it in ImageJ with NanoJ-Core, but I have no idea how to call a plugin from a macro in ImageJ.
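
In the meantime, a simple translation-only registration could look like this (not the code mentioned above, just a sketch assuming both cameras already see the same field at matched pixel size):

```python
# Sketch: subpixel translation-only registration of one camera to the other
# using phase cross-correlation. Assumes matched pixel size and field of view.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_to_reference(reference, moving, upsample_factor=10):
    """Estimate the subpixel (y, x) shift of `moving` relative to `reference`
    and return the shifted image together with the estimated offset."""
    offset, error, _ = phase_cross_correlation(
        reference, moving, upsample_factor=upsample_factor
    )
    return nd_shift(moving, offset), offset

# hypothetical usage with one plane from each camera:
# scmos_registered, offset = register_to_reference(emccd_plane, scmos_plane)
```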

I made a notebook that uses the trained CARE models to process timelapse images from an .nd2 file into an OME-TIFF with the associated metadata. If someone wants it, let me know. I can adapt it for pims/bioformats quickly, as I already had that for flowdec.
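
Something along these lines (a simplified sketch, not the actual notebook; the package choice, file names, and axis order are assumptions):

```python
# Simplified sketch (not the actual notebook): apply a trained CARE model to
# each timepoint of an .nd2 timelapse and write the result as an OME-TIFF.
import numpy as np
import nd2                      # pip install nd2
from tifffile import imwrite
from csbdeep.models import CARE

model = CARE(None, 'my_model', basedir='models')   # load a previously trained model

movie = nd2.imread('experiment.nd2')               # assumed shape (T, Z, Y, X)

# restore every timepoint with the trained model
restored = np.stack(
    [model.predict(movie[t], axes='ZYX') for t in range(movie.shape[0])]
).astype(np.float32)

# write an OME-TIFF; further OME metadata (pixel sizes, channels) can be added here
imwrite('experiment_restored.ome.tiff', restored, ome=True,
        metadata={'axes': 'TZYX'})
```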

This is my plan.

Hi @jmamede, I am curious why applying the NN would be slower than RL. How many iterations of RL do you use? One of the things on my to-do list is to train NNs to replicate RL and then (assuming they can learn RL for a given PSF and instrument parameters) profile the speed. I always thought one application of a trained NN would be faster than a few hundred iterations of RL. I have not profiled this yet, though.

Are your images, PSFs and networks available to test with? I have been working on an experimental RL implementation for CLIJ. It has an RL implementation extended with total variation regularization, so it may handle noise better than classical RL. I'd be interested to test it and compare to CSBDeep.

Brian

Usually 20 iterations with flowdec; that's about the standard in most commercial deconvolution packages with auto-stop detection.
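
For reference, this is roughly what a 20-iteration flowdec run looks like, with a simple timer around it for the kind of speed comparison Brian mentioned (a sketch; the image and PSF paths are placeholders):

```python
# Sketch: Richardson-Lucy deconvolution with flowdec, 20 iterations,
# timed for comparison with a CARE prediction. Paths are placeholders.
import time
from tifffile import imread
from flowdec import data as fd_data
from flowdec import restoration as fd_restoration

img = imread('data/low_emccd/stack.tif')   # 3D stack (Z, Y, X)
psf = imread('data/psf.tif')               # matching 3D PSF

algo = fd_restoration.RichardsonLucyDeconvolver(img.ndim).initialize()

t0 = time.perf_counter()
decon = algo.run(fd_data.Acquisition(data=img, kernel=psf), niter=20).data
print(f'flowdec RL (20 iterations): {time.perf_counter() - t0:.2f} s')
```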

Something else: I couldn't get the CSBDeep networks to be channel-independent when training with 3 channels, so I need 3 models each time (one per color). I don't know if that forced it to use more system RAM instead of VRAM.
Basically, my viruses are red and green, and in the model trained with 3 colors… the viruses start to pick up far-red (lamin) signal.
If there's a way to tell CSBDeep not to train across channels, I'm all ears.

I can send you my images if you’d like. I’ll send you my e-mail in private.