I’ve just started playing with CARE (running the examples as we speak) and was wondering whether you essentially need to image each sample twice: once at high resolution for your ground truth (presumably only a small subsection), and then at low resolution? And then repeat this for every other sample (since different orientations and different scenes would not transfer robustly)? Is this correct?
I was surprised by how long the training takes even on small data (currently running the 2D denoising example on a 1080Ti GPU), so I am not sure it is practical to perform this on large 3D data (light-sheet images of mouse brains). Particularly if each sample needs to be trained “on itself” (note: this is not a critique — I assume all ML approaches have this issue, and I’m just trying to understand, as I am new to ML).