Error in StarDist Fiji Plugin

Hi,

I am getting an error message when running StarDist as a Fiji plugin with the versatile model on an 8000 × 8000 16-bit TIFF image. The error is attached below. I tried going from n_tiles = 1 up to 640, but that only makes the error appear faster and does not solve the issue.

On a side note, I am training a model on very large cells labelled with actin, since the versatile model does not work on them. When the model is done training, could it be shipped with the plugin?

[INFO] Using default TensorFlow version from JAR: TF 1.12.0 CPU
[INFO] Loading TensorFlow model GenericNetwork_aea3be563cb56b8824f53a8c2382aaa5 from source file file:/var/folders/36/p232bry50m1f42ncbjk74j_c0000gn/T/stardist_model_2070489926113220062.zip
[INFO] Shape of input tensor: [-1, -1, -1, 1]
[INFO] Shape of output tensor: [-1, -1, -1, 33]
[INFO] Normalize .. 
[INFO] Dataset type: 32-bit signed float, converting to FloatType.
[INFO] Dataset dimensions: [7128, 4968]
[INFO] INPUT NODE: 
[INFO] Mapping of tensor input: 
[INFO]    datasetAxes:[X, Y]
[INFO]    nodeAxes:[(Time, -1), (Y, -1), (X, -1), (Channel, 1)]
[INFO]    mapping:[2, 1, 0, 3]
[INFO] OUTPUT NODE: 
[INFO] Mapping of tensor output: 
[INFO]    datasetAxes:[X, Y]
[INFO]    nodeAxes:[(Time, -1), (Y, -1), (X, -1), (Channel, 33)]
[INFO]    mapping:[2, 1, 0, 3]
[INFO] Complete input axes: [X, Y, Time, Channel]
[INFO] Tiling actions: [TILE_WITH_PADDING, TILE_WITH_PADDING, TILE_WITHOUT_PADDING, NO_TILING]
[INFO] Dividing image into 651 tile(s)..
[INFO] Size of single image tile: [240, 240, 1, 1]
[INFO] Final image tiling: [31, 21, 1, 1]
[INFO] Network input size: [428, 428, 1, 1]
[INFO] Processing tile 1..
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
	at de.csbdresden.csbdeep.network.DefaultModelExecutor.run(DefaultModelExecutor.java:82)
	at de.csbdresden.csbdeep.network.DefaultModelExecutor.run(DefaultModelExecutor.java:43)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tileAndRunNetwork(GenericCoreNetwork.java:593)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tryToTileAndRunNetwork(GenericCoreNetwork.java:566)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.mainThread(GenericCoreNetwork.java:468)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
	at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
	... 10 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1431)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.call(DefaultNetwork.java:82)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.call(DefaultNetwork.java:23)
	at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
	... 4 more
Caused by: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at org.tensorflow.Session.run(Native Method)
	at org.tensorflow.Session.access$100(Session.java:48)
	at org.tensorflow.Session$Runner.runHelper(Session.java:314)
	at org.tensorflow.Session$Runner.run(Session.java:264)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowRunner.executeGraph(TensorFlowRunner.java:54)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowNetwork.execute(TensorFlowNetwork.java:327)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.lambda$call$0(DefaultNetwork.java:74)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
	at de.csbdresden.csbdeep.network.DefaultModelExecutor.run(DefaultModelExecutor.java:82)
	at de.csbdresden.csbdeep.network.DefaultModelExecutor.run(DefaultModelExecutor.java:43)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tileAndRunNetwork(GenericCoreNetwork.java:593)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tryToTileAndRunNetwork(GenericCoreNetwork.java:566)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.mainThread(GenericCoreNetwork.java:468)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
	at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
	... 10 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1431)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.call(DefaultNetwork.java:82)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.call(DefaultNetwork.java:23)
	at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
	... 4 more
Caused by: java.lang.IllegalArgumentException: ConcatOp : Dimensions of inputs should match: shape[0] = [1,52,52,128] vs. shape[1] = [1,53,53,128]
	 [[{{node concatenate_1/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _output_shapes=[[?,?,?,256]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_sampling2d_1/ResizeNearestNeighbor, down_level_2_no_1/Relu, concatenate_1/concat/axis)]]
	at org.tensorflow.Session.run(Native Method)
	at org.tensorflow.Session.access$100(Session.java:48)
	at org.tensorflow.Session$Runner.runHelper(Session.java:314)
	at org.tensorflow.Session$Runner.run(Session.java:264)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowRunner.executeGraph(TensorFlowRunner.java:54)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowNetwork.execute(TensorFlowNetwork.java:327)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.lambda$call$0(DefaultNetwork.java:74)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[INFO] CSBDeep plugin exit (took 2910 milliseconds)
[ERROR] Module threw exception
java.lang.NullPointerException
	at de.csbdresden.stardist.StarDist2D.splitPrediction(StarDist2D.java:320)
	at de.csbdresden.stardist.StarDist2D.run(StarDist2D.java:292)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

@kapoorlab

I believe @uschmidt83 and/or @mweigert are the folks who can help in this case.

I was just able to reproduce this and found what the problem is. Before we fix it in the next release, you can work around the issue by selecting Model (.zip from File) under Advanced Options and using the versatile model (download as .zip (5.1 MB); don’t extract it).

If only the size is the problem, have you tried downsampling the input images before running StarDist on them?
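For reference, a minimal sketch of that downsampling step in Python (plain NumPy block subsampling as a stand-in; `skimage.transform.rescale` would do proper anti-aliased rescaling, and the StarDist call in the trailing comment is the usual Python API):

```python
import numpy as np

# Stand-in for the large 16-bit TIFF (the real image is ~8000 x 8000).
img = np.zeros((1024, 1024), dtype=np.uint16)

# Simple 2x block subsampling; use skimage.transform.rescale for
# proper anti-aliased downsampling on real data.
small = img[::2, ::2]
print(small.shape)  # (512, 512)

# With the StarDist Python API, prediction would then be e.g.:
#   from csbdeep.utils import normalize
#   from stardist.models import StarDist2D
#   model = StarDist2D.from_pretrained("2D_versatile_fluo")
#   labels, _ = model.predict_instances(normalize(small))
```

Remember that the label image produced on the downsampled data corresponds to the smaller coordinate system; scale coordinates back up if measurements on the original image are needed.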

Yes, we just released the feature to export the trained model. You can then use it as explained above.
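A minimal sketch of that export step (assuming the `stardist` Python package; the model name and `basedir` are hypothetical, and the snippet guards against the package or a trained model being absent):

```python
# Load a trained StarDist2D model from disk and export it as the
# TF_SavedModel.zip that the Fiji plugin can load via
# "Model (.zip from File)". Name and basedir are hypothetical.
try:
    from stardist.models import StarDist2D
    model = StarDist2D(None, name="my_model", basedir="models")
    model.export_TF()  # writes models/my_model/TF_SavedModel.zip
    exported = True
except Exception:
    exported = False   # stardist missing, or no such trained model
```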

Code and basic documentation here (there’s no example yet):

Sorry for the late reply.

Best,
Uwe


Hi Uwe, thanks for your help. The model I am now training for big cells uses a patch size of 1024, a downsampling grid of 8, and a distance threshold of 0.05. I am in the middle of testing the model. I used the export_TF function to save the model as a zip file, but when loading it in the Fiji plugin as a custom zip file I get a new exception:

[INFO] No TF library found in /Applications/Fiji.app/lib/.
[INFO] Using default TensorFlow version from JAR: TF 1.12.0 CPU
[INFO] Loading TensorFlow model GenericNetwork_c9c6b041e4f024518a40774440e45aaf from source file file:/Users/aimachine/Documents/TrainingOz/EmbryoOz1000StardistRay128ds8th0.05/TF_SavedModel.zip
[INFO] Caching TensorFlow models to /Applications/Fiji.app/models
[INFO] Unpacking variables/
[INFO] Unpacking saved_model.pb
[INFO] Unpacking variables/variables.data-00000-of-00001
[INFO] Unpacking variables/variables.index
java.lang.IllegalArgumentException: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node conv2d_1/convolution}} = Conv2D[T=DT_FLOAT, _output_shapes=[[?,?,?,32]], data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](input, conv2d_1/convolution/ReadVariableOp). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	at org.tensorflow.SavedModelBundle.load(Native Method)
	at org.tensorflow.SavedModelBundle.access$000(SavedModelBundle.java:27)
	at org.tensorflow.SavedModelBundle$Loader.load(SavedModelBundle.java:32)
	at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:95)
	at net.imagej.tensorflow.CachedModelBundle.<init>(CachedModelBundle.java:44)
	at net.imagej.tensorflow.DefaultTensorFlowService.loadCachedModel(DefaultTensorFlowService.java:135)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowNetwork.loadModel(TensorFlowNetwork.java:135)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.loadModel(DefaultNetwork.java:48)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.loadNetwork(DefaultModelLoader.java:41)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.run(DefaultModelLoader.java:20)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tryToPrepareInputAndNetwork(GenericCoreNetwork.java:523)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.initiateModelIfNeeded(GenericCoreNetwork.java:303)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.mainThread(GenericCoreNetwork.java:445)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[INFO] CSBDeep plugin exit (took 9938 milliseconds)
[ERROR] Module threw exception
java.lang.NullPointerException
	at de.csbdresden.stardist.StarDist2D.splitPrediction(StarDist2D.java:320)
	at de.csbdresden.stardist.StarDist2D.run(StarDist2D.java:265)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

The zip file of my model is also attached.

Regards,
Varun


I suspect this is caused by your TensorFlow version in Python being newer than 1.12.0 (the version used in your Fiji). Either downgrade TensorFlow in Python, or upgrade it in Fiji (Edit > Options > TensorFlow...).
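A quick way to check which TensorFlow a Python export environment would use (the Fiji version below is taken from the log earlier in this thread):

```python
fiji_tf = "1.12.0"  # version bundled with Fiji, per the log above
try:
    import tensorflow as tf
    python_tf = tf.__version__
except ImportError:
    python_tf = None  # TensorFlow not installed in this environment

print("Fiji TF:", fiji_tf, "/ Python TF:", python_tf)
# If the Python version is newer, downgrade in a fresh virtual
# environment, e.g.  pip install "tensorflow==1.12.0",
# or upgrade TF inside Fiji via Edit > Options > TensorFlow...
```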

Best,
Uwe

In the training parameters I set:

grid = (8, 8),
train_loss_weights = (1, 0.05)

for low-signal boundaries; the weight for the distance loss is 0.05 in my training. Yes, I have TF 1.15.0 in Python; I will create a virtual env with 1.12 and re-export the model.
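The configuration described above could be written roughly like this (assuming the `stardist` Python package; `n_rays=128` is read off the model folder name earlier in the thread, the other values come from this post, and the snippet guards against the package being absent):

```python
try:
    from stardist.models import Config2D
    conf = Config2D(
        n_rays=128,                     # rays per star-convex outline
        grid=(8, 8),                    # 8x internal downsampling
        train_patch_size=(1024, 1024),  # large patches for big cells
        train_loss_weights=(1, 0.05),   # (object prob, distance) losses
    )
    ok = True
except ImportError:
    ok = False  # stardist not installed; configuration sketch only
```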

Thanks a lot,
Regards,
Varun


@uschmidt83 Hello, I’m just having this error and I don’t know why…
"
[ERROR] net.imagej.DefaultDataset: Input axis of type Channel should have size 1 but has size 3
[INFO] CSBDeep plugin exit (took 1377 milliseconds)
[ERROR] Module threw exception
java.lang.NullPointerException
at de.csbdresden.stardist.StarDist2D.splitPrediction(StarDist2D.java:320)
at de.csbdresden.stardist.StarDist2D.run(StarDist2D.java:292)
at org.scijava.command.CommandModule.run(CommandModule.java:199)
at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
"
Thanks in advance!!

That is because your image has three channels, while StarDist expects a single channel as input. Try splitting the channels; you can then apply the model to the single-channel images and it should work.
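In the Python API the same idea is just slicing out one channel before prediction; a minimal NumPy sketch (the array here is a random stand-in):

```python
import numpy as np

# Stand-in for a 3-channel image; in Fiji this corresponds to
# Image > Color > Split Channels, in Python it is a plain slice.
rgb = np.random.rand(512, 512, 3).astype(np.float32)
nuclei = rgb[..., 2]      # e.g. the blue / nucleus channel
print(nuclei.shape)       # (512, 512)  -- single-channel 2D image
```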



Thanks for your fast answer! The strange thing is that I’m already using a split-channel image, which I previously saved as TIFF (only the blue channel, with the nuclei), and it is still giving me that error.
Thanks.

Hi @nacherso,

I see in your screenshot that your image type is RGB, which results in 3 channels as input to StarDist. You need to convert your image type first, I think Image > Type > 8-bit will work for that.

Best,
Uwe

True, I didn’t notice. Now it is working, thank you very much! :star_struck:


We just released a new version of StarDist for ImageJ, where this problem should be fixed.

Best,
Uwe


Hi Uwe - I am getting a similar error when trying to load a saved model into the Fiji plugin. I agree that this error seems to come from TF version differences, so I matched TF versions between Python and Fiji, but I use 1.12 GPU for training and 1.12 CPU in Fiji (training on an HPC cluster, Fiji on a Mac laptop). Do you think that is causing the problem?

I can successfully use my model for predictions in your sample Jupyter notebook (I think the environment I was using when running the notebook was actually TF 1.14, but I had no errors there).

[INFO] Using native TensorFlow version: TF 1.12.0 CPU
[INFO] Loading TensorFlow model GenericNetwork_ac4e764b724c8b519cf1b8299eb790dd from source file file:/Users/erindiel/stardist/models/stardist_spot_20200430/TF_SavedModel.zip
[INFO] Caching TensorFlow models to /Applications/Fiji.app/models
java.lang.IllegalArgumentException: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node conv2d_1/convolution}} = Conv2D[T=DT_FLOAT, _output_shapes=[[?,?,?,32]], data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](input, conv2d_1/kernel/read). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	at org.tensorflow.SavedModelBundle.load(Native Method)
	at org.tensorflow.SavedModelBundle.access$000(SavedModelBundle.java:27)
	at org.tensorflow.SavedModelBundle$Loader.load(SavedModelBundle.java:32)
	at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:95)
	at net.imagej.tensorflow.CachedModelBundle.<init>(CachedModelBundle.java:44)
	at net.imagej.tensorflow.DefaultTensorFlowService.loadCachedModel(DefaultTensorFlowService.java:135)
	at de.csbdresden.csbdeep.network.model.tensorflow.TensorFlowNetwork.loadModel(TensorFlowNetwork.java:135)
	at de.csbdresden.csbdeep.network.model.DefaultNetwork.loadModel(DefaultNetwork.java:48)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.loadNetwork(DefaultModelLoader.java:41)
	at de.csbdresden.csbdeep.network.DefaultModelLoader.run(DefaultModelLoader.java:20)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.tryToPrepareInputAndNetwork(GenericCoreNetwork.java:523)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.initiateModelIfNeeded(GenericCoreNetwork.java:303)
	at de.csbdresden.csbdeep.commands.GenericCoreNetwork.mainThread(GenericCoreNetwork.java:445)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[INFO] CSBDeep plugin exit (took 236 milliseconds)
[ERROR] Module threw exception
java.lang.NullPointerException
	at de.csbdresden.stardist.StarDist2D.splitPrediction(StarDist2D.java:338)
	at de.csbdresden.stardist.StarDist2D.run(StarDist2D.java:307)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Ah, I should have been able to answer my own question based on what I just wrote: it seems that running TF 1.14 in Fiji worked, just like when I ran your sample Jupyter notebook. Interesting, considering I trained the model with TF 1.12. I tried a few other versions in Fiji; only 1.14 worked.

Regardless…it is working!

What really matters is the version of TensorFlow at the time when you run model.export_TF() from Python to export the model as a zip file to be used in Fiji.

I’m glad!

Best,
Uwe

Well, that solves it! I did export it once back on my local machine, running 1.14. Thanks so much for this clarification.