I tested the content-aware image restoration (denoising) using the Jupyter notebooks. I have absolutely no experience with this sort of thing and virtually no related background knowledge, so I have a few questions below. Apologies if they are naive or already answered in the paper.
Is there any reason not to use the Jupyter notebook with my own data? Basically I replaced the example data with my own in order to run it.
When doing 3D denoising, is the information in the slices above and below a given slice used to reconstruct the 3D stack, or are the z-slices restored individually?
During data generation, should the patch size be adapted to the size of my objects of interest (the nuclei)?
- This is my input and GT. I used 36 image pairs like this for training. Images were taken on a Zeiss 880 Airyscan, with 0.01% laser power for the input and 2% for the GT, at the same acquisition speed.
And these are the patches I got with the default settings, patch_size = (64,64) and n_patches_per_image = 1024.
Should the patches be larger?
In the training I see that one may redefine the configuration:
- I see how to change the number of steps per epoch, but how do I change, for example, the learning rate? Is that not possible in the Jupyter notebook?
- What are epochs anyway? Is more always better? Is there a limit?
Is there an explanation of what all the information in TensorBoard means?
- These are the final plots I had:
- Are they good? Are they bad? How do I know?
I have two Titan GPUs, but I see that only one is used during training. Is this normal, or does it mean something is wrong with my setup? Or is it a limitation of the Jupyter notebook?
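From what I could find, Keras (which the notebooks use) only trains on one GPU per model unless the model is explicitly wrapped for multi-GPU use, so maybe the idle card is expected. If that is right, pinning the notebook to a single card at least keeps the other one free for a second job (assumption: the Titans are CUDA devices 0 and 1):

```shell
# Make only the first Titan visible to TensorFlow before starting Jupyter
# (on Windows, use `set CUDA_VISIBLE_DEVICES=0` instead of `export`).
export CUDA_VISIBLE_DEVICES=0
echo "$CUDA_VISIBLE_DEVICES"
# jupyter notebook   # launched from this same shell, it will only see GPU 0
```

Is this the recommended way, or is there proper multi-GPU support I am missing?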
I see in the CSBDeep in Fiji – Installation wiki that there is a section on multiple-GPU support, but it is Linux-only. Is that because nothing special is needed to get multiple GPUs working in Fiji on Windows, or because multiple GPUs in Fiji only work on Linux?
I installed both the plugin and the TensorFlow GPU native libraries, but apparently it cannot load TensorFlow. What am I missing?
This is an example of the output I got. GT is 2% laser power; the input is 0.01% or 0.04% laser power. Training was done with 36 image pairs as shown in point 3, with 400 steps per epoch and all other settings at their defaults.
- As I don't see a major difference between the two network outputs, is it correct that I can keep decreasing the laser power without compromising the image restoration?
Thank you for your help, and I hope these questions will also help other inexperienced users like me.
In any case I am very pleased with the network output and the whole Jupyter notebook experience.