A standard for extensible execution of ML models on images?

I have been playing with DeepImageJ, a tool for making image-centric TensorFlow models accessible for execution by end users from within the ImageJ user interface. This has led me to a question for the community:

Is there a metadata standard for documenting ML models which makes it more feasible to execute them on images in a general way?

To elaborate:

Since these TensorFlow models are graph-based, there is a lot of flexibility in the structure of the inputs and outputs. This makes it hard, in general, to “just execute the model” on an image without some prior knowledge of the particular model to be executed: its requirements, assumptions, and structure. For example, many models only operate on images with particular dimensions.
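
For instance, even just discovering a model’s inputs already requires loading it and inspecting its signature. A minimal sketch, assuming a TensorFlow 2.x SavedModel at a placeholder path:

import tensorflow as tf

# Load the SavedModel and grab its default serving signature
# ("path/to/saved_model" is a placeholder).
model = tf.saved_model.load("path/to/saved_model")
sig = model.signatures["serving_default"]

# Each input is a TensorSpec; shapes often contain unknown (None) dimensions,
# and nothing here says which axis is batch, channel, or z.
for name, spec in sig.structured_input_signature[1].items():
    print(name, spec.shape, spec.dtype)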

By documenting these requirements and assumptions in a metainformation file accompanying the model itself—e.g., how to treat each input and output of the graph, and constraints on what sorts of images are intended for use with the model—it becomes easier for general-purpose tools (like DeepImageJ) to make these models executable from user-friendly interfaces (like ImageJ or napari).

DeepImageJ has an implicit XML schema documenting some of this metainformation. In particular, it specifies required input image dimensions, supported tile sizes, and supported tile overlaps—but also some social metadata, such as who authored the model and where to look for more information about it.

Here is an example XML metadata file for StarDist executing via DeepImageJ:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Model>
    <ModelInformation>
        <Name>Stardist nuclei detection</Name>
        <Author>Martin Weigert</Author>
        <URL>http://csbdeep.bioimagecomputing.com/index.html</URL>
        <Credit>Max Planck Institute of Molecular Cell Biology and Genetics, and Center for Systems Biology Dresden, Germany</Credit>
        <Version>n/a</Version>
        <Date>2018</Date>
        <Reference>Uwe Schmidt, Martin Weigert, Coleman Broaddus and Gene Myers, Cell Detection with Star-Convex Polygons, Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018</Reference>
    </ModelInformation>
    <ModelTest>
        <InputSize>320x256</InputSize>
        <OutputSize>320x256</OutputSize>
        <MemoryPeak>578.1 Mb</MemoryPeak>
        <Runtime>  2.4 s</Runtime>
        <PixelSize>1.00pixel x 1.00pixel</PixelSize>
    </ModelTest>
    <ModelCharacteristics>
        <ModelTag>tf.saved_model.tag_constants.SERVING</ModelTag>
        <SignatureDefinition>tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY</SignatureDefinition>
        <InputTensorDimensions>,-1,-1,-1,1,</InputTensorDimensions>
        <NumberOfInputs>1</NumberOfInputs>
        <InputNames0>input</InputNames0>
        <InputOrganization0>NHWC</InputOrganization0>
        <NumberOfOutputs>1</NumberOfOutputs>
        <OutputNames0>output</OutputNames0>
        <OutputOrganization0>NHWC</OutputOrganization0>
        <Channels>1</Channels>
        <FixedPatch>false</FixedPatch>
        <MinimumSize>64</MinimumSize>
        <PatchSize>128</PatchSize>
        <FixedPadding>true</FixedPadding>
        <Padding>22</Padding>
        <PreprocessingFile>preprocessing.txt</PreprocessingFile>
        <PostprocessingFile>postprocessing.txt</PostprocessingFile>
        <slices>1</slices>
    </ModelCharacteristics>
</Model>
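
To make this concrete, here is a minimal sketch (Python standard library only) of how a general-purpose tool might parse such a file to recover the execution constraints; the file name config.xml is an assumption:

import xml.etree.ElementTree as ET

tree = ET.parse("config.xml")  # the metadata file shown above
chars = tree.find("ModelCharacteristics")

layout = chars.findtext("InputOrganization0")  # e.g. "NHWC"
min_size = int(chars.findtext("MinimumSize"))  # smallest valid image side
patch = int(chars.findtext("PatchSize"))       # tile size for prediction
halo = int(chars.findtext("Padding"))          # overlap added around each tile

print(f"layout={layout}, min={min_size}, patch={patch}, halo={halo}")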

It strikes me that this sort of definition is hardly unique to DeepImageJ, and could be utilized effectively in any sort of general-purpose model execution framework for images. Does anyone active in this community know whether work is being done to define a community standard along these lines? @agoodman @jni @bcimini @AnneCarpenter @fjug @mweigert @uschmidt83 @iarganda @joshmoore

P.S. There are often also further restrictions regarding which sorts of data are valid for input given the scope of a pretrained model’s training data, such as only images from a particular microscope, within certain intensity ranges, etc. Not all of these restrictions can be easily documented in a technical way. While I am interested in how to deal with that thorny issue, in this topic I am merely wondering whether there is any standard whatsoever for those elements which can be documented.

6 Likes

Hi Curtis! Anna Kreshuk’s lab @akreshuk, Emma Lundberg’s lab, people around Anne Carpenter @AnneCarpenter, and my own lab have started to work on a pretty general solution for a model zoo, covering model execution and even (re-)training.

We will certainly present what we have during our December Hackathon at CSBD…
Here is some more info: https://github.com/bioimage-io
Not sure how much the info you can find there makes sense without some additional explanation… but rest assured… more to come!

Credit: All people involved are part of the GitHub org I linked to!

Best,
Florian

4 Likes

In terms of the exchange of a model itself, @ctrueden, the only substantial fanfare that I’ve heard of in recent years is ONNX (Open Neural Network Exchange): https://github.com/onnx/onnx

I haven’t played with it though, so I don’t know whether it actually makes executing models more feasible. You’d think that would be the ultimate goal, of course. ~J

2 Likes

IIRC, not every operation has an ONNX equivalent (since some frameworks lack certain operations), but otherwise you just import the ONNX model into a compatible runtime.

1 Like

Hey Curtis!

I don’t know of any specific metadata standard like that. I guess the closest standard in the community is what @joshmoore already pointed out, ONNX, which seems oriented towards the model representation rather than its application. In that sense, what I feel is missing in DeepImageJ’s XML file is the pixel/voxel size and the image normalization procedure. The latter is optionally included in their pre-processing macro. In my experience, the models are extremely sensitive to these two factors.
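
For instance, CARE/CSBDeep-style models typically expect percentile-based normalization before inference. A minimal sketch in NumPy; the exact percentiles differ per model, so the defaults below are purely illustrative:

import numpy as np

def normalize_percentile(img, p_low=1.0, p_high=99.8, eps=1e-20):
    # Map the given intensity percentiles to [0, 1]; feeding a model data
    # normalized any other way can silently degrade its predictions.
    lo, hi = np.percentile(img, [p_low, p_high])
    return (img.astype(np.float32) - lo) / (hi - lo + eps)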

I guess these kinds of issues will be addressed in the bio-image model zoo that @fjug was talking about? Unfortunately I will miss the hackathon, but I would definitely be interested in joining the conversation (maybe by Skype or Hangouts?) if possible :wink:

4 Likes

Yes, ONNX is mostly for interop, e.g. training in Caffe and running in TensorFlow. You can probably probe the ONNX model to determine input dimensions, but pixel size, normalization, etc. are beyond the scope of the model representation.
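
For example, probing input dimensions is straightforward with the onnx Python package (the model path below is a placeholder), but structural information is indeed all you get:

import onnx

model = onnx.load("model.onnx")
for inp in model.graph.input:
    # dim_value is 0 for symbolic/unknown dimensions
    dims = [d.dim_value if d.dim_value > 0 else "?"
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)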

1 Like

Hi Curtis and others,

I think coming up with a truly general metadata standard is going to be very difficult. We might have a chance to get it right if we limit the scope in an appropriate way for this community. Ideally, assigning a version to the metadata spec itself will make it possible to update it in the future.

I drafted a metadata spec in the context of CARE with @frauzufall over a year ago (not public). Our goal back then was to specify the metadata that we export as JSON and that should be parsed in Fiji when running a network. However, we never really implemented this (as far as I remember/know).
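
Since that draft is not public, here is a purely hypothetical sketch of the kind of versioned metadata I mean; every field name below is illustrative, not the actual spec:

import json

spec = {
    "format_version": "0.1.0",  # version the spec itself, so it can evolve
    "name": "care-denoising-model",
    "axes": "yxc",  # semantics of the input tensor axes
    "pixel_size": {"x": 0.1, "y": 0.1, "unit": "micrometer"},
    "preprocessing": {"normalize_percentile": [1.0, 99.8]},
}
print(json.dumps(spec, indent=2))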

I was just taking a look at our draft and noticed that it’s not even (fully) correct. That really shows that we need good metadata descriptions…

Interesting. Would have been nice to know about this.

FYI, none of them are publicly visible.

Cheers,
Uwe

2 Likes

Hi Uwe,

I joined the bioimage.io discussion quite late because I had another workshop in parallel. Our document came to mind and I went through it again, but their specification already includes everything we wrote down a year ago and much more. It would be great to work on or discuss this in December; maybe modularize/wrap CSBDeep so it can execute a bioimage.io model, maybe with StarDist and proper postprocessing… No concrete plans yet.

Best, Debo

2 Likes

More than happy to include you in any of this, really! I hope you know that you are always invited to interact with everybody in my lab and beyond. Your expertise and your ability to deliver quality work on almost anything you touch are every project’s dream!

With respect to information flow: I discuss things daily with everybody during stand-up, weekly on Friday, and if I expect it to be relevant, I push info in person. I invite you to join us more often (for example, I’m not aware of a single thing you’re currently up to… nor was I aware that you worked with Debo on anything similar some time ago, until she mentioned it a few days back).

Looking forward to seeing you more often,
Florian

2 Likes

Hi all,

As some of you know, we at ZEISS are currently working on two interesting topics relevant to this discussion:

  • The APEER processing platform, which allows you to “daisy-chain” modules, which are nothing more than Docker containers plus a module specification
    • workflows are managed by Kubernetes
    • are well suited to run ML workflows
    • can be executed in the cloud or locally
  • Intellesis Machine Learning
    • Pixel Classification, Confidence Thresholds, Conditional Random Fields, Spectral Segmentation
    • Segmentation via U-Nets (starting with ZEN 3.1)
    • AppManager Infrastructure to create specific services
    • built-in Tiling Client for multi-dimensional datasets (not limited to CZI)
    • completely built upon TensorFlow, Dask, scikit-learn, etc.

Both of these are under constant development, and therefore we also rely on feedback from the community.
The next big topic will be the option to allow “everybody” to use their own trained models inside our framework for segmentation etc.

To that end, we have already proposed a first “standard” describing what somebody needs to do in order to import a model into ZEN and use it for segmentation inside the image analysis framework. We will soon add more “Services” for Instance Segmentation, Classification, and Image-2-Image processing based on needs and feedback.

The first draft of our specification for segmentation models can be found here: ANN Model Specification

As a next step, we will release a PyPI package that allows converting a protobuf (.pb) file or a trained model (.h5) into a *.czmodel, which can be used in our software.

If you prefer not to use ZEN (for whatever reason), it is always possible to use those models on the APEER platform as well.

I would be really happy to get your feedback.

Sebi

1 Like