Workflow failed on Biaflows sandbox

Dear all,

Following the I2K tutorial, I am trying to run an Icy workflow on the Biaflows sandbox. The workflow is named “W_ML-i2k-IcyNucleiSeg” and is hosted on this GitHub repository. I originally forked the Spot Detection workflow example.

As the workflow (v0.0.2) was failing, I made some modifications.
In the Dockerfile, I modified the section # Install Cytomine python client (changed the version to v2.7.3) and the section # Install Neubias-W5-Utilities (changed the version to v0.9.1 and replaced “neubiaswg5-utilities” with “biaflows-utilities”), except for the last lines referring to # custom version of imagecodecs.
I also modified wrapper.py (replaced neubiaswg5 with biaflows and “NeubiasJob” with “BiaflowsJob”).
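For reference, the renaming boils down to plain textual substitutions; a minimal sketch as shell commands (assuming no occurrence needs special handling):

# rename the utilities package in the Dockerfile install section
sed -i 's/neubiaswg5-utilities/biaflows-utilities/g' Dockerfile
# rename the Python package and the job class in the wrapper
sed -i 's/neubiaswg5/biaflows/g; s/NeubiasJob/BiaflowsJob/g' wrapper.py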

The workflow (v0.0.3) is still failing and I would need some help to fix it.

Here is the error log:

RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
/usr/local/lib/python3.6/site-packages/tifffile/tifffile.py:8211: UserWarning: libopenblasp-r0-ae94cfde.3.9.dev.so: cannot open shared object file: No such file or directory
  Functionality might be degraded or be slow.

  warnings.warn('%s%s' % (e, warn))
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
Traceback (most recent call last):
  File "/app/wrapper.py", line 8, in <module>
    from biaflows .helpers import prepare_data, BiaflowsJob, upload_data, upload_metrics, get_discipline
  File "/usr/local/lib/python3.6/site-packages/biaflows/helpers/__init__.py", line 3, in <module>
    from .metric_upload import upload_metrics
  File "/usr/local/lib/python3.6/site-packages/biaflows/helpers/metric_upload.py", line 11, in <module>
    from biaflows.metrics import computemetrics_batch
  File "/usr/local/lib/python3.6/site-packages/biaflows/metrics/__init__.py", line 3, in <module>
    from .compute_metrics import computemetrics, computemetrics_batch
  File "/usr/local/lib/python3.6/site-packages/biaflows/metrics/compute_metrics.py", line 45, in <module>
    from .skl2obj import *
  File "/usr/local/lib/python3.6/site-packages/biaflows/metrics/skl2obj.py", line 2, in <module>
    from skan import csr
  File "/usr/local/lib/python3.6/site-packages/skan/__init__.py", line 1, in <module>
    from .csr import skeleton_to_csgraph, branch_statistics, summarize, Skeleton
  File "/usr/local/lib/python3.6/site-packages/skan/csr.py", line 6, in <module>
    import numba
  File "/usr/local/lib/python3.6/site-packages/numba/__init__.py", line 34, in <module>
    from numba.core.decorators import (cfunc, generated_jit, jit, njit, stencil,
  File "/usr/local/lib/python3.6/site-packages/numba/core/decorators.py", line 12, in <module>
    from numba.stencils.stencil import stencil
  File "/usr/local/lib/python3.6/site-packages/numba/stencils/stencil.py", line 11, in <module>
    from numba.core import types, typing, utils, ir, config, ir_utils, registry
  File "/usr/local/lib/python3.6/site-packages/numba/core/ir_utils.py", line 16, in <module>
    from numba.core.extending import _Intrinsic
  File "/usr/local/lib/python3.6/site-packages/numba/core/extending.py", line 17, in <module>
    from numba.core.pythonapi import box, unbox, reflect, NativeValue  # noqa: F401
  File "/usr/local/lib/python3.6/site-packages/numba/core/pythonapi.py", line 10, in <module>
    from numba import _helperlib
ImportError: numpy.core.multiarray failed to import

What should be modified?

Many thanks in advance
Best regards,
Marion

Hello @MarionLouveaux ,

It is an issue with the numpy library version installed by the Dockerfile. Please remove the following line in your Dockerfile: RUN pip install numpy==1.13.0. A more recent numpy version is already installed earlier in the procedure. We previously pinned it because of the specific tiff format generated by Icy, which required a specific imagecodecs version. Maybe things have changed in Icy and it now works without the imagecodecs?
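After removing that line and rebuilding, a quick sanity check of which numpy actually ends up in the image could look like this (a sketch; icyseg is an arbitrary local tag, and it assumes python is on the image’s PATH):

sudo docker build -t icyseg .
# print the numpy version baked into the image
sudo docker run --rm --entrypoint python icyseg -c "import numpy; print(numpy.__version__)"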

Anyway, I then tried to run your modified docker image locally and it raises new errors related to the fixed folder you mentioned in your Icy Protocol. This local folder will not exist in the Docker image and should not be stated explicitly in the Protocol. That’s why we previously used one of Icy’s plugins (“plugins.adufour.blocks.tools.input.Folder”, see our original Dockerfile in https://github.com/Neubias-WG5/W_SpotDetection-Icy). Maybe there is a better way to do this with newer Icy versions?

Please let us know your thoughts so that the integration of Icy workflows into Biaflows can be improved!

@MarionLouveaux

To speed up debugging, you can build and run the docker image locally (to avoid Dockerhub latency). Download your latest code version from your GitHub repository, then build and run it locally, as explained in our documentation: Building the workflow image, running it in a local container and debugging.

E.g.:

sudo docker build -t icyseg .

Then:

sudo docker run -it icyseg --host https://biaflows-sandbox.neubias.org --public_key XXXXXXXXX --private_key YYYYYYYYYY --software_id 1397370 --project_id 205081 --radius 3

(You can get the public XXXXXXXXX and private YYYYYYYYYY keys on the Biaflows sandbox server (menu at the top right: Account); I used the software_id and project_id related to your existing workflow on Biaflows and the Nuclei-segmentation problem.)

Once it works locally, you can then update your GitHub repository, trigger a new release, and it should come online in Biaflows.
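For example, a release essentially boils down to pushing a version tag (a sketch, assuming Dockerhub is configured to build on tags and the branch is named master):

git commit -am "Remove numpy pin, fix input folder handling"
git tag v0.0.4
git push origin master --tags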

Keep us updated!

Dear Raphael @maree,

Thank you for your inputs!!

Indeed, it’s a good idea to test it locally.
I’ll run some tests this week and keep you updated, starting with the modification of the Dockerfile and checking how I should specify input folders. I will also have to check whether the tiff format Icy outputs still requires a specific version of imagecodecs (lots of things have changed between versions 1.9 and 2.1 of Icy).

Best regards,
Marion

Dear Raphael,

I managed to fix all issues! :tada: Many thanks again for your inputs!

  • I modified the Dockerfile (removed the numpy installation command).
  • I added IDs to each variable (radius, input and output folder) in the Icy protocol, as for the headless mode in Protocols: http://icy.bioimageanalysis.org/plugin/protocols/ (see the sketch after this list).
  • I also modified the way I save results in the protocol, so that output images are saved with the same name as the input images.
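Passing those variable IDs on the command line might then look roughly like this (a sketch from memory of the Icy Protocols headless docs: -hl runs headless, -x executes the Protocols plugin, and inputFolder/outputFolder/radius stand in for the actual variable IDs set in the protocol):

# run the protocol headless, feeding each protocol variable by its ID
java -jar icy.jar -hl -x plugins.adufour.protocols.Protocols \
    protocol="/icy/protocol.protocol" \
    inputFolder="/data/in" outputFolder="/data/out" radius=3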

I still need to check if a specific imagecodecs version is needed and modify the Dockerfile to get the latest version of Icy.

I built and ran my Docker image locally before sending a new release to Dockerhub.
:thinking: I thought that I could push all my changes to GitHub and that, as long as I wasn’t making a new release, it would not trigger a new workflow on the Biaflows sandbox. But I see that each of my commits triggered a new workflow run on the Biaflows sandbox.



Moreover, the new release (v0.0.4) created after these 4 commits does not appear on Biaflows sandbox. Is that normal?

Best regards,
Marion

Hi Marion,

I just made the SpotDetection-Icy work.

I could remove the imagecodecs dependency completely. It was needed because Python, via tifffile, could not read the images written by Icy. However, that now seems to work without imagecodecs.

Please check that you configured Dockerhub to only build an image for a new tag, and not each time you push something to GitHub.
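In Dockerhub’s automated-build settings, the build rule could look roughly like this (field names from memory; the regex captures the version from tags like v0.0.5):

Source type: Tag
Source: /^v([0-9.]+)$/
Docker tag: {\1}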

Best,
Volker

Hi Marion,

I think these 0.0.3 runs correspond to the local tests (using docker build/run) you made on your own computer. They all have the same version number (0.0.3), so these are not new release images but only new execution runs of the previous version. The “local” tests you did are not fully local: they connect to the Biaflows server, retrieve the images locally, run the process locally, then send the results back to the Biaflows server; that’s why you see these runs.

Regarding version 0.0.4, do you see the automatic build on Dockerhub? If so, please double-check that the names (in the descriptor, …) match. Maybe this comes from the name field in the descriptor, where you should not have “W_ML-i2k-IcyNucleiSeg” but “ML-i2k-IcyNucleiSeg”. The “W_” is automatically added as a prefix, so in your case the system looks for “W_W_ML-i2k-IcyNucleiSeg”, which does not exist. Let us know if it still does not work.

Dear Raphael, Dear Volker,

Thank you for your feedback!

I made the following modifications (compared to the previous version):

  • removed the imagecodecs installation from the Dockerfile
  • changed the Icy download link in the Dockerfile: instead of the Zenodo upload, which is now a bit too old (version 1.9), one can use the download links from the Icy website (at the bottom of the Download page, in the section “Get previous releases of Icy”); see the sketch after this list
  • changed “name”: “W_ML-i2k-IcyNucleiSeg” to “name”: “ML-i2k-IcyNucleiSeg” in descriptor.json
  • updated the readme.md
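For the download-link change, the Dockerfile section might look roughly like this (a sketch; the URL placeholder stands for the actual 2.1 release link from the Download page, and it assumes wget and unzip are available in the base image):

# fetch and unpack a pinned Icy release
RUN mkdir -p /icy && \
    wget -O /tmp/icy.zip "<icy-2.1-release-url>" && \
    unzip /tmp/icy.zip -d /icy && \
    rm /tmp/icy.zip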

I built and ran the container locally and it worked :+1:
I hadn’t paid attention to the fact that these tests were not fully local. Now I understand why I can see these local tests among the workflow runs of the nuclei segmentation problem on the Biaflows sandbox.

After testing locally, I pushed my changes to GitHub and made a new release (v0.0.5), which triggered the build of the container on Dockerhub.

After this step, I still have the issue that I cannot see version v0.0.5 on the Biaflows sandbox.
Here is a snapshot of the configuration panel:

What should I modify or revert (it worked for v0.0.2 and v0.0.3)?

Best regards,
Marion

Hello Marion,

Sorry, this time it was an issue on our side. [ML-i2k-IcyNucleiSeg (v0.0.5)] is on the sandbox server now (I enabled it in the NUCLEI-SEGMENTATION project). I ran your workflow but it produces errors:

Initializing...
Error while initialize image cache:
java.lang.IllegalArgumentException: Illegal value 0 for maxBytesLocalDisk: has to be larger than 0
OpenJDK Runtime Environment 1.8.0_272-8u272-b10-0+deb9u1-b10 (64 bit)
Running on Linux 4.15.0-88-generic (amd64)
Number of processors : 8
System total memory : 65.7 GB
System available memory : 30.9 GB
Max java memory : 14.6 GB
Image cache initialized (reserved memory = 5707 MB, disk cache location = /tmp)
Headless mode.
java.lang.UnsatisfiedLinkError: /icy/lib/unix64/vtk/libvtkRenderingCoreJava.so: libjawt.so: cannot open shared object file: No such file or directory
Cannot load VTK library...
Icy Version 2.1.0.0 started !
No space left on device

We’ll try to reproduce this error as soon as possible. It might come from the fact that Icy tries to install/update plug-ins when executed. If I remember correctly, we had a chat with Stephane two years ago about disabling this, and he provided a specific Icy version that does not do it, to be sure we keep the version specified in the Dockerfile. Any clue from your/his side?

Hi Raphael,

Thank you!!

I’ll ask @Stephane about this.
I am wondering if the new “disable network” feature in headless mode for version 2.1 could help fix this:
“Disable network when the Icy app starts: use the parameter --nonetwork or -nnt when launching Icy from a terminal or command prompt: java -jar icy.jar --nonetwork”

Hi @maree, @MarionLouveaux,

About your last error, it looks like the Icy cache engine couldn’t initialize properly (maybe a permissions issue).
That is not a big deal, as honestly I don’t think you should use the cache for headless usage (better to allocate enough memory to handle big datasets), and you can disable the Icy cache with the -nc or --nocache parameter when launching from the command line :slight_smile:
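Combined with the --nonetwork flag mentioned above, the launch line could then look roughly like this (the -hl/-x options and the protocol path are assumptions, not the actual wrapper code):

# headless, no image cache, no network access
java -jar icy.jar -hl -nc -nnt -x plugins.adufour.protocols.Protocols protocol="/icy/protocol.protocol"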

Hope that helps!

Best,

– Stephane