DeepImageJ: problem with padding size when using my own model

Dear all,

Inspired by the NEUBIAS meeting last week, I started to play with DeepImageJ. My aim is to deploy models trained with YAPiC to Fiji. In the long run, I would like to have a function in YAPiC to export trained models easily to Fiji. Using the CSBDeep Fiji plugin would also be an option; I tried it as well, but with DeepImageJ I made more progress so far…

I managed to export a model trained with YAPiC to a TensorFlow SavedModel and open it with DeepImageJ. However, there is a problem with the padding size.

Bundling my model via DeepImageJ >> DeepImageJ Build Bundled Model works fine:

Input and output size is correctly recognized.

Then I set the padding to 214, which is the difference between the input and output size:
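To make that number concrete: the padding value entered here is just the total border the network trims off, computed from the tile shapes reported later in this thread (572x572 in, 358x358 out). A minimal sketch of the arithmetic:

```python
# Tile shapes from this thread: a U-Net with padding='valid'
# convolutions takes 572x572 tiles and returns 358x358 tiles.
input_size = 572
output_size = 358

# Total border removed by the network, i.e. the "padding" value
# entered in the Build Bundled Model dialog:
total_padding = input_size - output_size
print(total_padding)  # 214
```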

Next, a test prediction is performed on the test image; this works as expected. The image is tiled and processed:

Until it is finished:

Now I want to apply my bundled model via DeepImageJ >> DeepImageJ Run:

Screenshot from 2020-03-13 13-09-44

However, I get an error, because the padding value is not accepted:
Screenshot from 2020-03-13 13-09-58

This is strange, because it works without problems in the test run (executed during the Build Bundled Model workflow).

If I set the padding value lower, it is accepted, but tiling does not work correctly any more. Any idea how to solve that?


Dear all,

I could fix the problem by changing the network shape, i.e. by using the option padding='same' in the Keras model definition. Luckily, resizing is also possible for networks that are already trained.
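For reference, the size arithmetic behind that change: a padding='valid' convolution shrinks each spatial dimension, while padding='same' preserves it. A small sketch of the standard Keras/TensorFlow output-size formulas (pure Python, no TF required; the function name is mine, not a YAPiC or Keras API):

```python
import math

def conv_output_size(size, kernel=3, stride=1, padding="valid"):
    """Spatial output size of a convolution along one dimension,
    following the usual Keras/TensorFlow convention."""
    if padding == "valid":
        return (size - kernel) // stride + 1
    if padding == "same":
        return math.ceil(size / stride)
    raise ValueError(f"unknown padding mode: {padding}")

# A single 3x3 'valid' conv trims one pixel from each border;
# stacking many of them (plus pooling/upsampling) is what shrinks
# 572 down to 358 in a 'valid' U-Net.
print(conv_output_size(572, padding="valid"))  # 570
print(conv_output_size(572, padding="same"))   # 572
```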

I will provide an option in YAPiC soon to deploy models to DeepImageJ.


Hi @cmohl2013

Sorry, we kept this restriction during DeepImageJ Run but we need to remove it! Thank you!

Happy to hear that you solved the problem :slightly_smiling_face: However, I’m not sure about the padding value in the first network. Let me explain it:

  • The first network you trained with input size 572x572x3 and output size 358x358x3, probably had all the convolutional layers as padding='valid', which means “no padding”, i.e. the network did not add any padding during the processing. If that is the case, when you were building a bundled model, you could set padding to 0.

  • For the second network, as you used padding='same', the output of your new network has size 572x572x3, but the true (valid) information is located in the central part of the output with size 358x358x3. This is also called the receptive field of the network. Hence, in this case you need to set a padding of 107 pixels that need to be removed from each side of the output (572-2*107=358).
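The per-side arithmetic from the second bullet, written out (sizes as given above):

```python
input_size, output_size = 572, 358

# Border to trim from *each* side of the 'same'-padded output to
# recover the valid central region:
padding_per_side = (input_size - output_size) // 2
print(padding_per_side)                   # 107
print(input_size - 2 * padding_per_side)  # 358
```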

If I set the padding value lower, it is accepted, but tiling does not work correctly any more. Any idea how to solve that?

Do you mean that you can perceive artifacts between tiles in the output image?

Luckily resizing is also possible for networks that are already trained.

Do you mean to resize from size 358x358x3 to 572x572x3? Then you would change the pixel size of the output…

I will provide an option in YAPiC soon to deploy models to DeepImageJ.

Great!

Hi @cmohl2013,

I successfully used YAPiC (with Google Colaboratory and Renku).

I’m very interested in such a feature and I’m looking forward to the new release!

Cheers,

Romain

Dear @esgomezm
thank you for your detailed answer!

Sorry, we kept this restriction during DeepImageJ Run but we need to remove it! Thank you!

This would be very helpful! :+1:

The first network you trained with input size 572x572x3 and output size 358x358x3, probably had all the convolutional layers as padding='valid' , which means “no padding”, i.e. the network did not add any padding during the processing.

This is correct.

If that is the case, when you were building a bundled model, you could set padding to 0.

If I do that, I get tiling artifacts:


The tiles are offset by exactly 214 pixels, which is the difference between the input and output shape.
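A hedged sketch of why the offset is exactly 214 pixels: each 572-pixel input tile yields only a 358-pixel output, so a stitcher that places one output per input tile without accounting for the shrinkage leaves adjacent output tiles separated by the missing border. The sizes are from this thread; the plugin's actual tiling code is not shown here.

```python
# Each 572-pixel input tile produces only a 358-pixel output.  If the
# stitcher places outputs at the input-tile spacing (assuming output
# size equals input size), the gap between adjacent output tiles is
# the missing border:
input_size, output_size = 572, 358
gap = input_size - output_size
print(gap)  # 214
```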

For the second network, as you used padding='same' , the output of your new network has size 572x572x3, but the true (valid) information is located in the central part of the output with size 358x358x3. This is also called the receptive field of the network. Hence, in this case you need to set a padding of 107 pixels that need to be removed from each side of the output (572-2*107=358).

Ok, this should work for the network with padding='same', thank you!

Do you mean that you can perceive artifacts between tiles in the output image?

Yes, see image above.

Do you mean to resize from size 358x358x3 to 572x572x3? Then you would change the pixel size of the output…

No, I just mean changing padding='valid' to padding='same', as you mentioned above. No upscaling.


Dear @romainGuiet, thank you for providing this positive feedback!

I successfully used YAPiC (with Google Colaboratory and Renku).

Very interesting.

Hi @cmohl2013

Could you please tell me which release of DeepImageJ you are using? Did you download it directly from the web page (https://deepimagej.github.io/deepimagej/) or from the GitHub releases (https://github.com/deepimagej/deepimagej-plugin/releases)?

Hi @esgomezm

I downloaded it from the web page (the version in the GUI says 1.0.1).

Hi @cmohl2013

Ok, we found the error! Thank you! The problem is that the current version assumes that the output of the network has the same size as the input :woman_facepalming: We will correct it in the coming days. I’ll write back as soon as we have it.

Dear @esgomezm

great, thank you for investigating this!
Ok, that’s why it works with models where I keep input and output shape identical. Once you have a new version I am happy to test it. Just let me know.


Hi @esgomezm

The YAPiC function for deploying 2D models to DeepImageJ version 1.0.1 is almost in place, and we will release it around next week, together with some other updates.

The user can convert YAPiC-trained 2D U-Net models with a single command into a ready-to-use DeepImageJ bundled model (including the XML file, example images, and the preprocessing macro).

yapic deploy deepimagej path/to/my/yapic_model.h5 path/to/my/bundled/deepimagej_model

Maybe you would like to mention this in the neubias webinar tomorrow. :wink:

Best
Christoph

Hi @cmohl2013

This sounds really good! I’m playing a bit with YAPiC and it’s really nice and easy! Congrats!
It seems that the function to deploy DeepImageJ models isn’t released yet. Do you know when it will be available? Also, for which version of TensorFlow would this function work? At the moment the Java API for TF is only available up to 1.15, so DeepImageJ isn’t ready to work with TF2 models.

Among the changes we are including in the new release of DeepImageJ, the integration of the configuration file defined in the BioImage Model Zoo might be of interest to you. The information is similar to that in the config.xml.

Esti

@esgomezm

To deploy DeepImageJ models, YAPiC has to run with TF version 1.13 (I tested various TF versions). For that reason, I removed the tensorflow dependency from the yapic package; in the new version, one has to install tensorflow separately.

Yes, the new YAPiC version has not been released yet. In principle, DeepImageJ deployment works, but some documentation and the command line interface still have to be implemented. I’ll keep you updated.

Will the old XML format still be supported in the new DeepImageJ release? Do you already know when you will release it?

Best
Christoph