OME NGFF: combine multiple images into one "dataset"

Hi All,

cc @joshmoore @constantinpape @Kimberly_Meechan

I am starting this thread to discuss how we could combine multiple OME NGFF images into one image “dataset”. My personal use case would be how to represent a MoBIE dataset (GitHub - mobie/mobie: MultiModal Big Image Data Sharing and Exploration) in terms of OME NGFF.

My current definition of an “image dataset” would be: a set of images that can be meaningfully displayed in the same physical coordinate system, combined with sufficient metadata to know where in this common coordinate system they should be positioned. I guess that’s already the first point to be discussed: whether this is a definition that would be useful for other people.

Another question would be how to technically “create” such a dataset.

Along those lines:
@NicoKiaru, for inspiration, could you post here the text of a bdv.xml that combines several images into one dataset? I think you are by now a known expert on this topic :wink:

4 Likes

Just to link the similar discussion that focuses on which viewers to use, as well as on how to create the dataset by mapping transforms from one image to the other: Goolge-maps type browser - #5 by Christian_Tischer

1 Like

My initial thoughts:

That makes a lot of sense (at least to me :slight_smile:). But I maybe wouldn’t call it image dataset. Reason: dataset is often used to refer to the object that contains nd data, for example the scale datasets s0, s1, …, in the case of ome.zarr. Maybe multi-image-container or image-collection?

To support multiple images, the multiscales dictionary in the attributes could (1) be nested further, or (2) instead of multiscales one could allow multiple keys. So instead of

{
  "multiscales": {
    "name": "some-image",
    ...
  }
}

(1):

{
  "multiscales": {
    "some-image1": {...},
    "some-image2": {...}
  }
}

or (2):

{
  "some-image1": {...},
  "some-image2": {...}
}

The problem with both suggestions, though, is that they are not backwards compatible; but I can’t think of a way that would be right now.

2 Likes

@Christian_Tischer

This definition sounds meaningful to me. And it is more or less exactly the philosophy we follow with CZI + ZEN Connect.
We call this concept sample-centric data storage. I am not sure if that helps your discussions… :wink:

2 Likes

I did not invent anything; I just reused the specifications used in the xml files of BigDataViewer. AFAIK, the original use case is positioning a lot of “sources” (XYZT) in a single global 3D coordinate system (typical use case: light sheet, but that’s also good for correlative or multimodal imaging). BigStitcher by @StephanPreibisch uses these metadata in order to perform several rounds of finer and finer registration steps.

So the info you have in a bdv xml file is:

  • for each source (XYZ)
    • for each timepoint (T)
      • a chain of affine transforms in 3D

Keeping the chain in memory makes it possible to keep track of, and easily cancel if needed, a failed registration step. The first affine transform usually contains the voxel size information; the second affine transform can contain the position in 3D, or how to unskew the data if needed.
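For illustration, here is roughly what the registration part of such a bdv.xml looks like for a single source (setup) and timepoint. The element names are the ones used by the BigDataViewer format; the ids, names and affine values below are made up. Each affine is a 3x4 row-major matrix, and the whole chain gets concatenated into one affine when the source is rendered:

<SpimData version="0.2">
  <SequenceDescription>
    <!-- image loader, view setups (sources) and timepoints are described here -->
  </SequenceDescription>
  <ViewRegistrations>
    <!-- one ViewRegistration per (setup, timepoint) pair -->
    <ViewRegistration timepoint="0" setup="0">
      <ViewTransform type="affine">
        <Name>stitching / registration step</Name>
        <affine>1.0 0.0 0.0 120.5 0.0 1.0 0.0 -33.2 0.0 0.0 1.0 0.0</affine>
      </ViewTransform>
      <ViewTransform type="affine">
        <Name>calibration</Name>
        <affine>0.1625 0.0 0.0 0.0 0.0 0.1625 0.0 0.0 0.0 0.0 0.4 0.0</affine>
      </ViewTransform>
    </ViewRegistration>
  </ViewRegistrations>
</SpimData>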

Maybe looking at what’s done in other software could be a nice source of inspiration:

SVG is a language based on XML for describing two-dimensional vector and mixed vector/raster graphics. 

It’s 2D, but maybe it can be extended to 3D?

1 Like

That looks very useful to me, as it would, e.g., allow one to apply (and document) both channel and drift corrections without having to re-save the voxel data.

However, could one go further? Let’s say I have a huge XYZ volume, e.g. from volume EM, and would like to apply different transformations to the individual XY slices (a very common use case afaik, isn’t it @schorb?). @NicoKiaru @bogovicj, do you know if that could be represented with the current bdv.xml specifications, or does it only allow for one transformation per volume?

3 Likes

Then, as far as I know, you need to consider your slices as independent sources.

It might be possible to write a custom ‘source’ which allows for flexibility in 2D transforms while keeping a simple ‘z stacking’, but that would need to be outside the bdv xml specifications.

Outside bdv, TrakEM2 does that - registration of big 2D planes:

1 Like

On not using “datasets”, I’d concur, since it shows up in various places already. I’ve been using “containers” colloquially, with the only one currently implemented being for high-content screening plates. But “collection” or anything suitably unused would work.

This would certainly work as things currently stand, but I’m beginning to think I made some mistakes in the v0.1 spec. For example, we’re starting to look into storing multiple different downsamplings (e.g. one for 2D and one for 3D access). Currently the only place we could put that information is also in the multiscales object, i.e. we likely need more, which means we’re probably looking at breaking the v0.1 spec anyway. My thinking would be:

  • rename “multiscales” to be “image” (or similar), making it clear that it turns a group (or HDF5 dataset ← that word again!) into an image. In that case, supporting multiple images, as @Christian_Tischer is rightly requesting here, would mean doing so via a containing group:
- images     # group with new "collection" or "container" metadata
  - image1   # group with "multiscales" now called "image"
  - image2   # the same 

Note: you could do this today to represent multiple images in a single OME-Zarr fileset, but you would either need to explicitly pass the URL to your clients or have the client search the hierarchy, since there is no metadata to say "please find images at path image1/ and image2/".
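Just to sketch what that missing piece could look like, the containing images group could get metadata as minimal as something along these lines (purely illustrative; no such key exists in the current spec):

{
  "collection": {
    "images": ["image1", "image2"]
  }
}

A client pointed at the images group could then discover image1/ and image2/ without walking the hierarchy.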

Agreed. But I think this is going to happen several times as we work through all these concepts as a community, so I think we need to be ready for upgrades :100:

:+1: which leads to the multiple “Scenes” concept, right? And would you ever have more than one sample within a given fileset, @sebi06? i.e. would that be yet another level? For me, delineating some of those semantics helps split up “series” vs “scenes” vs “sources” vs “positions”. Cf. What is the recommended/best way to open OME-TIFFs recorded with MicroManger 1 and 2 from Python 3? - #18 by joshmoore

@NicoKiaru brought up this idea recently on a call. I find it intriguing, but knowing what I do about the SVG spec, I’d tend to put this on the “will take time to implement” end of the spectrum. For what it’s worth, I do think that there need to be multiple collection/container implementations. If we can describe the most immediately needed one, then we can build from there. Ditto on:

All the best,
~Josh

4 Likes

My vote would be for “collection”.

> - images     # group with new "collection" or "container" metadata
>  - image1   # group with "multiscales" now called "image"
>  - image2   # the same 

I like this!

Maybe we could have the discussion on specifying the transforms that map the pixels into the physical world in a different thread. However, it could be related to the image collection discussion: is where an image lives in the physical world (a) a property of each individual image, or (b) a property that is (only) defined at the level of the image collection? I think I remember several people making the point that (b) is better, as the same image could potentially be “reused” in different spatial contexts; is that right?

If we think along the lines of (b), a certain image (the raw data) would need to be able to be part of multiple image collections. Does that mean that we should allow for several image groups? Probably yes, right? I am sorry for my elaborate argument, as I am starting to feel that this was clear anyway?! :wink:
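Just to make (b) a bit more concrete, I could imagine something like the following (the keys are completely hypothetical, nothing like this is specified yet): the same on-disk image appears in two collections, each with its own placement in the physical world:

{
  "collection-a": {
    "images": [
      { "path": "image1", "transform": { "type": "translation", "translation": [0.0, 0.0, 0.0] } },
      { "path": "image2", "transform": { "type": "translation", "translation": [0.0, 512.0, 0.0] } }
    ]
  },
  "collection-b": {
    "images": [
      { "path": "image1", "transform": { "type": "translation", "translation": [100.0, 0.0, 0.0] } }
    ]
  }
}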

2 Likes

A few more comments:

  1. In order to efficiently browse through an image set, I think we would need an “image feature table”. Again, I am not sure whether this is something that should be defined on the image-collection level or scraped together from the image metadata that exists anyway.
  2. @joshmoore, is your vision (a) that the specifications for HTM data would in future versions of the file format be “absorbed” (:arrow_backward: not sure what’s the right word here) by the image-collection data model or (b) that they would remain as stand-alone specs? I think I would favour (a).

I need to go back and review https://github.com/zarr-developers/zarr-specs/issues/50 but I believe there were even suggestions of having both.

You mean “collections”? Yes. But then one must ask how one finds the collections.

What do you see being in this “image feature table”?

Currently we have a “plate” as its own construct. I would think (a) so that we have a hierarchy of constructs:

  • BaseCollection
    • Bag: basically the simplest collection with no extra metadata
    • Plate: metadata for a 2D grid of images (plus Fields, etc.)
    • Sample: multiple non-grid images (scenes?)

It might be that the evolution of these types would eventually bring us to @NicoKiaru’s SVG collection.
~J.

1 Like

Currently I am thinking mainly of biological and experimental metadata, such as treatment and, e.g., well_name in case it is HTM data.
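For example, something roughly like this (the key and column names are completely made up, just to illustrate the idea):

{
  "image-feature-table": {
    "columns": ["image", "treatment", "well_name"],
    "rows": [
      ["image1", "DMSO", "A01"],
      ["image2", "compound_x", "B03"]
    ]
  }
}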

And regarding how one finds the collections, maybe like this?

- collections
  - my_favorite_images
    - image1
    - image10
  - all_the_big_images
    - image4
    - image5
  - ...

Understood. And going back to your previous comment, I also am not sure at which level to do it. If there’s either not a lot of metadata or not a lot of images, encoding it at the image layer is fairly straightforward. But when you start to have GB (or TB!) of tabular data along with a study, I assume that needs to be its own data source.

Certainly an option. Alternatively (or perhaps additionally) the top level group could have metadata of the form:

{
    "my_favorite_images": {
        "@type": "collection",
        "images": ["image1", "image10"]
    },
    "all_the_big_images": {
        "@type": "collection",
        "images": ["image4", "image5"]
    }
}
1 Like

It seems to me that as the structure becomes more complicated and the collections larger, efficient data extraction will require some form of indexing.

Another line of thought that could be explored is to use RO-Crate or something similar: Research Object Crate (RO-Crate)

Definitely agree, Jean Karim. Hopefully that can be an extension to the specs as they are built rather than needing a full reworking. Certainly having bidirectional metadata will help in many situations. At some scales, though, the metadata will likely become more like data and need to be stored in binary, at which point we may have to deal with both in the same application.

I like RO-Crate a lot, but it hasn’t become clear to me yet how to merge that with the separate Zarr hierarchy. I’ve tended to try to keep things simpler starting out rather than introducing another technology/library. I failed to meet up with Stian at the biohackathon, but I imagine that a few conversations could show us a better path forward.

~J.

1 Like

Just to link another use case: I saw this example using napari,

possibly using napari-czifile2 · PyPI