CARE deep learning on 3D

Dear all,
@fjug, @uschmidt83,

I just started to learn Python (only days ago…), and thank you for your wiki, your scripts, and all the explanations!!!
It’s a pleasure to learn how to code with them.
I tried CARE and it works very well, even with only a few pictures (5 GT, 5 low in total) and a small training of 40 epochs (+/- 1 h).

Could you tell me if this is correct:

  • First I cut my 512 × 512 pictures into 4, then I rotated the files and flipped them horizontally to increase the total number of files.
    Then I trained the model for 40 epochs, but it should be 400.

  • Should I train on 512 × 512 pixels or on smaller pictures?
  • How do you use CARE with 3D images? (I only tried 2D.)
  • Could I train in 2D and then apply the model in 3D?
  • Could I use big data from a lightsheet microscope? And how?

I can’t find a way to use CSBDeep with Fiji.
Kind regards,

java.lang.IllegalArgumentException: NodeDef mentions attr 'allowed_devices' not in Op<name=VarHandleOp; signature= -> resource:resource; attr=container:string,default=""; attr=shared_name:string,default=""; attr=dtype:type; attr=shape:shape; is_stateful=true>; NodeDef: {{node up_level_0_no_0/bias}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[up_level_0_no_0/bias]]
	at org.tensorflow.SavedModelBundle.load(Native Method)
	at org.tensorflow.SavedModelBundle.access$000(SavedModelBundle.java:27)
	at org.tensorflow.SavedModelBundle$Loader.load(SavedModelBundle.java:32)
	at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:95)
	at net.imagej.tensorflow.CachedModelBundle.<init>(CachedModelBundle.java:44)
	at net.imagej.tensorflow.DefaultTensorFlowService.loadCachedModel(DefaultTensorFlowService.java:135)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowNetwork.loadModel(TensorFlowNetwork.java:163)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.loadModel(DefaultNetwork.java:76)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.loadNetwork(DefaultModelLoader.java:69)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.run(DefaultModelLoader.java:48)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tryToPrepareInputAndNetwork(GenericCoreNetwork.java:524)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.initiateModelIfNeeded(GenericCoreNetwork.java:303)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.mainThread(GenericCoreNetwork.java:445)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Hi @Alex.h,

thanks for your post! Pinging @uschmidt83 @fjug @tibuch again for the first questions (we were all a bit busy with a workshop the last week).

I have not yet seen the error you are posting from Fiji. If you have a download link for the zipped model, we could investigate the issue better (ping @tomburke-rse). Which TensorFlow version are you training with in Python? Are you using TF 2.x? That might cause issues, because we still use TF 1.15 in Java. You might have to wait for TF 2.x to come to the CSBDeep update site in Fiji, or downgrade to TF 1.15 in Python.

Best,
Debo

Not sure what “cutting” in 4 means, but adding rotated and flipped versions is good.

Yes, train longer to get a better model.

Probably doesn’t matter that much.

The same as you do in 2D. But adding rotated and flipped versions is more difficult. Also, you typically don’t want to flip along the Z axis.
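To make the Z-axis caveat concrete, here is a minimal NumPy sketch of 3D-safe augmentation (the function name is my own, not part of CSBDeep): rotations and flips happen only in the XY plane, so the Z axis is never mirrored or rotated through.

```python
import numpy as np

def augment_3d(stack):
    """Yield XY-plane rotations and flips of a ZYX stack.

    Z is left untouched: axial resolution usually differs from
    lateral resolution, so flipping or rotating through Z would
    present the network with physically implausible data.
    """
    for k in range(4):                      # 0, 90, 180, 270 degrees in XY
        rotated = np.rot90(stack, k=k, axes=(1, 2))
        yield rotated
        yield np.flip(rotated, axis=2)      # horizontal flip (X axis)

# Usage: a toy 4x8x8 ZYX stack yields 8 augmented copies.
stack = np.random.rand(4, 8, 8)
augmented = list(augment_3d(stack))
print(len(augmented))  # 8
```

For non-square XY shapes, the 90° and 270° rotations swap the Y and X dimensions, which is fine as long as your training patches are square in XY.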

In principle yes, but it will likely lead to bad results.

The prediction function has a parameter n_tiles that will internally break the image into smaller tiles, denoise them, and stitch the results back together. Hence, you can predict on big images out of the box, as long as they fit in host memory (RAM).
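The idea behind `n_tiles` can be sketched like this (names and logic are illustrative, not the CSBDeep internals, and real CARE tiling additionally overlaps tiles to avoid border artifacts):

```python
import numpy as np

def predict_tiled(image, func, n_tiles=(2, 2)):
    """Split a 2D image into n_tiles pieces, apply func to each
    tile, and stitch the results back together.

    This keeps peak memory proportional to one tile instead of
    the whole image. Overlap handling is omitted for clarity.
    """
    rows = np.array_split(image, n_tiles[0], axis=0)
    out_rows = []
    for row in rows:
        tiles = np.array_split(row, n_tiles[1], axis=1)
        out_rows.append(np.concatenate([func(t) for t in tiles], axis=1))
    return np.concatenate(out_rows, axis=0)

# Usage: with an identity "network", stitching reproduces the input.
img = np.arange(36, dtype=float).reshape(6, 6)
result = predict_tiled(img, func=lambda t: t, n_tiles=(2, 3))
```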

Dear @frauzufall,
thank you for your time. I used TensorFlow 2… I will try again with TF 1.15.

Dear @uschmidt83,

  • Could I use something to work with .nd2 or .czi files instead of TIFF?

  • Could I use different resolutions in paired data, like 512 × 512 as input and 1024 × 1024 as output?
    Thank you for your time.

Hi Alex,
regarding the file formats, you might want to look at those links:
nd2: https://imagej.nih.gov/ij/plugins/docs/ND2Reader.pdf
czi: Opening multiscene CZI files with FIJI

Maybe my message was not so clear. Can I use .czi, .nd2, etc. with Python 3?

My bad.
With regard to Python 3, then:
There is a Python wrapper for Bio-Formats: https://pythonhosted.org/python-bioformats/
It supports both file types.
Then there is nd2reader, a pure Python package supporting Python >= 3.5: https://github.com/rbnvrw/nd2reader
And for czi, you could look into this: https://pypi.org/project/czifile/
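To glue those readers together, one option is a small dispatch helper that picks a package by file extension. The extension-to-package mapping follows the links above; the helper itself is my own sketch, not part of any of those libraries.

```python
from pathlib import Path

def pick_reader(path):
    """Map a microscopy file extension to a reader package.

    Assumed mapping, based on the packages linked above:
      .nd2        -> nd2reader
      .czi        -> czifile
      .tif/.tiff  -> tifffile
    """
    readers = {".nd2": "nd2reader", ".czi": "czifile",
               ".tif": "tifffile", ".tiff": "tifffile"}
    suffix = Path(path).suffix.lower()
    try:
        return readers[suffix]
    except KeyError:
        raise ValueError(f"No known reader for {suffix!r} files")

print(pick_reader("cells.czi"))  # czifile
```

Once you know the package, loading gives you NumPy arrays you can feed to CARE, though converting everything to TIFF up front is often the simpler route.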

Hope this helps 🙂

I do remember seeing this before. It looks like this issue. If I recall correctly, everything was working in TF 2.3.0, but the 2.3.1 release broke the model export from Python if the model is then used in Fiji.

The workaround for now is to export the model with TF 1.15. Training can be done in TF 1.x or 2.x.
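A tiny guard following that workaround might look like this (purely illustrative, the function is my own; it encodes the "export from 1.x" rule rather than tracking which individual 2.x releases happen to work):

```python
def export_safe_for_fiji(tf_version):
    """Return True if a model exported with this TensorFlow
    version is expected to load in Fiji's TF 1.15 Java runtime.

    Heuristic from this thread: exports from TF 2.x (observed
    broken with 2.3.1) may fail to load, so only treat 1.x
    exports as safe.
    """
    major = int(tf_version.split(".")[0])
    return major < 2

print(export_safe_for_fiji("1.15.0"))  # True
print(export_safe_for_fiji("2.3.1"))   # False
```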

In principle yes, as @tomburke-rse mentioned. But I wouldn’t recommend it. At least convert your training data to TIFF and save yourself the trouble.

Yes, check the UpsamplingCARE example.
