Hi, I am trying to apply CSBDeep restoration to enhance live-cell imaging under extremely low-light conditions (the cells must survive for 48 h).
I have 3 questions.
The images are low light, 1024x1024 at 220 nm per pixel, 23-plane z-stacks with 0.5 um spacing, taken with a gained EMCCD.
1. When should deconvolution be applied on top of it?
We usually deconvolve, since these are 3D stacks (Richardson-Lucy, with the Nikon package or flowdec).
Should I do:
a. Train on raw input (low light) → ground truth (high quality, more light), both before deconvolution, and then deconvolve the CSBDeep result.
b. Train on deconvolved raw input (low light) → deconvolved high-quality ground truth, and then just apply CSBDeep to the deconvolved low-light data.
c. Train on raw input (low light) → deconvolved ground truth, so that CSBDeep does the enhancement and the "deconvolution" in one step.
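For context on what the deconvolution step itself does, here is a minimal Richardson-Lucy sketch in NumPy/SciPy (the same scheme that flowdec and the Nikon package implement, stripped to its core; the toy point-source demo and all shapes are made up for illustration):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Minimal 2D Richardson-Lucy deconvolution (illustrative sketch)."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)          # data / model
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a point source with a Gaussian PSF, then sharpen it back.
yy, xx = np.mgrid[-5:6, -5:6]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()

truth = np.zeros((33, 33))
truth[16, 16] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
```

Option (c) essentially asks the network to learn this mapping implicitly from the training pairs, instead of applying it explicitly before or after restoration.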
2. Different cameras with different pixel sizes
We acquire the live cells with the EMCCD (220 nm pixel size), but we also have a back-illuminated sCMOS that gives 110 nm pixels (2048x2048).
Can I use the sCMOS for the ground truth?
a. Should I use the class csbdeep.data.Resizer (see "Model application" in the CSBDeep 0.6.1 documentation) on the EMCCD data at 220 nm per pixel to make it 2048x2048?
b. Can I capture the ground-truth z-stack with 0.3 um spacing and resize it the same way?
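For concreteness, one simple way to put the EMCCD stack onto the sCMOS ground-truth grid (assuming the two fields of view are registered) is plain interpolation with scipy.ndimage.zoom; this is just a sketch of the resampling arithmetic, not CSBDeep's own API, and the array below is a small stand-in for the real 23 x 1024 x 1024 stack:

```python
import numpy as np
from scipy.ndimage import zoom

# Toy EMCCD-like stack (real data: 23 z-planes @ 0.5 um, 1024x1024 @ 220 nm/px).
lowres = np.random.rand(23, 64, 64).astype(np.float32)

# Resample onto the sCMOS ground-truth grid: 0.3 um z-steps, 110 nm pixels.
factors = (0.5 / 0.3, 220.0 / 110.0, 220.0 / 110.0)  # (z, y, x) scale factors
upsampled = zoom(lowres, factors, order=1)  # trilinear interpolation
```

With these factors the 23-plane stack becomes 38 planes and each lateral dimension doubles, matching the 110 nm / 0.3 um ground-truth sampling.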
3. Is more data diversity better or worse?
We image viruses before fusion (high signal intensity), after fusion, etc. The viruses should be visible for 8 hours, and different intensities should appear at the different time points.
a. Should I reconstruct 0 h to 15 min with those ground truths alone?
b. Or should I mix images from the whole 48 hours (when some have no viruses at all)?
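In case it helps frame option (b): mixing just means pooling the input/ground-truth patch pairs from the different time windows into one shuffled training set. A minimal NumPy sketch, with entirely made-up array names and shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patch stacks (N, Z, Y, X): early window (0 h - 15 min, virus-rich)
# and the rest of the 48 h movie (fewer or no viruses). Placeholder data only.
X_early, Y_early = rng.random((120, 8, 32, 32)), rng.random((120, 8, 32, 32))
X_late,  Y_late  = rng.random((80, 8, 32, 32)),  rng.random((80, 8, 32, 32))

# Pool both conditions and shuffle the pairs together, so every training
# batch sees the full range of intensities.
X = np.concatenate([X_early, X_late])
Y = np.concatenate([Y_early, Y_late])
perm = rng.permutation(len(X))
X, Y = X[perm], Y[perm]
```

The question then is whether a model trained on this pooled set beats separate models trained per time window.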
Thanks so much for these great tools. I would not have dreamed this would be possible 5 years ago!