Thanks to everyone who joined the @ngff calls yesterday. It was great as ever to chat with everyone. Recordings are available under Index of /presentations/2021/community-call-2021-02-23 for anyone who missed out.
Beyond catching up, the meeting had a primary goal of getting more people involved in the writing of the specifications. A number of issues were either created or commented on after (or in some cases during!) the meeting:
If any are of interest, feel free to jump into those conversations. Reports will be made back to image.sc periodically as things solidify. The intent is to keep the roadmap organized so you can have an idea of which specifications are coming when.
Several other topics had takers and those issues should appear soon:
- Collections of images (and other things) (Update: #31)
- Flexible naming and ordering of dimensions including dimensions like “samples”
- Vector-based annotations (to go along with the current pixel-based annotations, or “labels”)
- and tables of features to go along with the vector- and pixel-based annotations
There was also interest in other specs like remote links but no clear takers yet. So if you’re looking for something to do…
Finally, a few other topics were touched on that likely won’t translate directly or immediately into issues, but we can all keep them in mind as we move forward:
- Process: @jkh1 and others pointed out that in order to involve an ever wider circle of organizations, there will need to be more formalization of the process for getting involved. (For the moment, if you are interested, #ome-ngff is the tag to follow and joining @ngff will get you all the most relevant updates)
- Minimal, or intermediate, specification: @DragaDoncila and @jni pointed out that some of the specifications are more general purpose and make the current format well-suited for non-microscopy data. It is likely worth keeping the specification “approachable” for outsiders, with the most general-purpose specifications coming first, to engage with as many communities as possible.
- External files: Eventually we will need to draw a line between what can be specified as a part of the NGFF specs and what should really be handled externally. This may involve integrating with other specifications like research objects. This will likely also be part of the solution of allowing complete exports of OMERO data.
- Too many files: The topic of too many files was discussed again, both from the sysadmin and the vendor perspectives. It will be critical to give users tools to choose the proper balance between file size and filesystem latency.
If I’ve forgotten anything, feel free to shout (and/or start a new thread). Expect the planning for the next meeting to kick off in a month or so, but if you start feeling antsy, say the word.
All the best,
live notes for the interested
Session 1 Live Notes
JKH: would be good to have a process to involve vendors.
- JKH: Have some kind of RFC for the format to help convince vendors
- J.F.: discuss with vendors when purchasing
labels, polygons, meshes (10:32)
- CP: still not sure how happy they are with one big container. want to ship EM volume only when updated. Josh: remote
- CP: openorganelle, neubias tutorial
Other specifications?
- JF: Discussion with Pete Bankhead. Transformations need supporting in depth. Namespace.
- DS: vector-based labelling? pixel-based could be sparse or dense.
- MS: cryo: position, orientation, some metadata. Even vector specification of each atom.
- KH: point-cloud? In some sense, yes.
- GG: looked into polygons & meshes. Difficult to have both in the same definition. In meshes people are only expecting to see vertices and edges. But point with URL to PDB entry would work in polygons. For meshes: PLY could cover most cases (esp. since it’s extensible).
- JKH: point as minimal polygon. Martin alluding to having a vector associated with each point.
- KH: PLY can handle arbitrary dimensions, don’t need edges. Can store chemical, kinetic information.
- GG: but can’t store holes in PLY.
- EP: interested in multiscale meshes and fragments. getting to billions of faces. (Neuroglancer has a very custom format)
- CP: saalfeld repositories also have a custom-implementation
- EP: defining the scales, then saying how they are chunked in n-dim space.
- JKH: using linking to other representations, e.g. as table
- GG: meshio author suggested embedding in zarr doesn’t make sense. More precisely, he said we should not create a new specification.
- JF: Pete suggested GeoJSON. Fairly practical. Several implementations. (INVOLVEMENT/VB)
- EP: would like to see useful tools for extracting byte-values from the zarr arrays. Polygons-to-bitmasks
- GG: (INVOLVEMENT/VB)
- KH: (INVOLVEMENT/VB) State of Java? Josh: ZarrReader. Mobie reads from BDV. Needs refactoring.
- DS: involvement with Scifio. Lots of issues that need fixing.
- CP: (INVOLVEMENT/PB)
- KH/DS: especially transformations. DS: overlapping labels.
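Picking up JF's note above, Pete Bankhead's GeoJSON suggestion is easy to make concrete. Below is a minimal sketch of one polygon annotation built with Python's `json` module; the coordinates and the `"label"` property are invented for illustration, and GeoJSON itself leaves the coordinate unit (pixels vs. physical) to be pinned down by the surrounding spec.

```python
import json

# A minimal GeoJSON "Feature" describing one polygon annotation.
# Coordinates here are hypothetical pixel coordinates; GeoJSON does not
# mandate a unit, so an NGFF spec would need to define that separately.
polygon = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # One exterior ring; the first and last points must match to close it.
        "coordinates": [[[0, 0], [100, 0], [100, 50], [0, 50], [0, 0]]],
    },
    "properties": {"label": "nucleus"},  # arbitrary per-shape metadata
}

print(json.dumps(polygon, indent=2))
```

One practical appeal raised in the call is that several implementations already exist, so a spec could point at GeoJSON rather than inventing a new polygon encoding.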
- JNI: cf. collections - good things in Draga’s format. Good to open individual images with labels, or the whole collection. HCS too heavyweight
- JM: worth getting in touch with Tischi (attending the afternoon question)
- MS: includes labels and meshes? like what CP said, modularized. collection of entities?
- JNI: nice feature of new spec is to open individual things as if they are the whole thing
- CP: link?
- DDP: arose out of a need to open a bag of images for training purposes, but maintain clear correspondence to the original set (for future editing). don’t want to just always have a stack. want a hierarchy to open the stack or individual images. should definitely support more than just images. entities, point-layers, etc. all make sense.
- WM: essentially what we did with HCS
- JM: balance of how much metadata to group together all images, e.g., to allow quicker clients.
- JKH: design of database and how to relate the various items. relational tables that relate indices. But zarr needs to deal with that without a database. Will need indices.
- KH: trying to understand …
- JM: two bidirectional levels.
- JKH: like nested collections
- WM: didn’t nest arbitrarily since it’s hard to give an overview.
minimal (i.e. non-microscopy)
- Juan: 5D hardcoded requirement? Still plans to remove this requirement? Yes. (Can discuss more if needed)
- Juan: related: can spec be super lightweight bottom-up, then elaborate? ie minimal required metadata
- DDP: nice to abstract away a lot of the metadata that’s not needed. OME-Zarr is almost perfect for satellite images but just not quite. an intermediate minimal spec is needed.
- JM: tension between zarr and ome-zarr (i.e. don’t want HCS)
- JF: e.g. WSI that is 2D you don’t want to care about the extra stuff
Session 2 Live Notes
- DB: different from collection? Not really. Once we have the
- Dimensions (DB)
- still hard-coded
- see openorganelle for one implementation
- used multiscale representation. transform with each multiscale, list of axes
- CT: each resolution has a transform? Yes. Iteration on
- Principle is: multiresolution is a list of datasets with single-resolution.
- LK: bspline warping field?
- useful since so large that you don’t want to keep writing them
- transform stack
- JAM: something for later?
- JB: in favor of pushing it a little bit. too dependent on implementations (one person could be storing bspline displacements, etc.). Only in the case of displacement fields (which are standard across tools): XxYx2 (2D) with a tag.
- see “non linear transforms” link below to ngff issue
- DB: xarray support?
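To make the principle above concrete — "multiresolution is a list of datasets with single-resolution", each resolution carrying its own transform — the metadata could look roughly like the sketch below. All key names here are illustrative guesses based on the discussion, not the actual NGFF metadata.

```python
# Hypothetical sketch of multiscale metadata where every resolution level
# carries its own transform (scale + translation), as discussed above.
# Key names ("datasets", "transform", ...) are illustrative only.
multiscale = {
    "axes": ["z", "y", "x"],
    "datasets": [
        {"path": "s0", "transform": {"scale": [1, 1, 1], "translation": [0.0, 0.0, 0.0]}},
        {"path": "s1", "transform": {"scale": [2, 2, 2], "translation": [0.5, 0.5, 0.5]}},
        {"path": "s2", "transform": {"scale": [4, 4, 4], "translation": [1.5, 1.5, 1.5]}},
    ],
}

# Each level doubles the voxel size; the translation records the half-voxel
# shift introduced by windowed (e.g. mean) downsampling.
for d in multiscale["datasets"]:
    print(d["path"], d["transform"]["scale"])
```

This shape also leaves room for the transform-stack idea (e.g. appending a displacement-field reference per level) without changing the list-of-datasets structure.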
- Samples per Pixel (JS)
- conflate samples/channels or new samples dimension. Also mosaic
- JAM: used “A” for sample since “S” was for scene.
- JS: how to make clear to someone what they’re seeing. input/output.
- DT: you do want the last channel to be 3 for RGB.
- DS: is RGB/RGBA the only example of where this “hack” would be necessary?
- DB: RGBA is display, but microscope could be higher.
- CT: just treat it as 3 channels?
- DB: access pattern is important, e.g. for GPU.
- BK: kind of like column vs row store, depends on what the use case is (computation vs display)
- JS: missing indices.
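BK's "column vs row store" analogy above can be illustrated in a few lines: interleaved ("samples-last") storage keeps the RGB values of one pixel contiguous, which suits display, while planar ("channel-first") keeps each channel contiguous, which often suits per-channel computation. A pure-Python sketch with flat lists, made-up pixel values:

```python
# Four pixels of a 2x2 RGB image, values invented for illustration.
pixels = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)]

# Interleaved layout: r,g,b,r,g,b,... (one pixel's samples are adjacent)
interleaved = [v for px in pixels for v in px]

# Planar layout: all r values, then all g, then all b
planar = [px[c] for c in range(3) for px in pixels]

print(interleaved[:6])  # [1, 2, 3, 4, 5, 6]
print(planar[:4])       # [1, 4, 7, 10]
```

Which layout (and hence which dimension order for a "samples" axis) is better depends entirely on the access pattern, which is why the discussion above resists a single answer.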
- Display settings specification? (CT)
- currently under “OMERO”
- needs specification
- Label mask downsampling methods (center pixel, mode) (CT)
- was writing something for the Java implementation but opened pandora’s box
- Janelia: storing frequencies at each location
- DB: mean and mode in a bare bones implementation (only for visualization)
- for lossless you need annotations.
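The two candidate methods named in the heading above (center pixel vs. mode) differ in one line of code. Here is a sketch of 2x "mode" downsampling for a 2D label image in pure Python; the function name and the tie-breaking behavior are my own choices, not anything specified.

```python
from collections import Counter

def mode_downsample_2x(labels):
    """Downsample a 2D label image by 2x, keeping the most common label
    ("mode") in each 2x2 block, rather than the center pixel.
    `labels` is a list of equal-length rows. A sketch, not the spec."""
    out = []
    for y in range(0, len(labels) - 1, 2):
        row = []
        for x in range(0, len(labels[0]) - 1, 2):
            block = [labels[y][x], labels[y][x + 1],
                     labels[y + 1][x], labels[y + 1][x + 1]]
            # Counter.most_common is deterministic only up to ties; a real
            # spec would need an explicit tie-breaking rule.
            row.append(Counter(block).most_common(1)[0][0])
        out.append(row)
    return out

img = [[0, 0, 1, 1],
       [0, 2, 1, 1],
       [2, 2, 0, 0],
       [2, 2, 0, 3]]
print(mode_downsample_2x(img))  # [[0, 1], [2, 0]]
```

Note how label 3 disappears entirely at the lower resolution — which is exactly why the notes above point to frequency storage or annotations for lossless use cases.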
- Java writer
- when available?
- using N5 library or something else?
- bioformats2raw writes ome-zarr. Also a branch with ZarrReader and ZarrWriter implementations.
- Geometry (who added? napari issue?)
- meshes / pts / etc.
- DB: there will eventually be data that won’t be zarr-able.
- TL: truth of napari is that we’re leaning heavily on these conversations. (There’s not yet a model) Community building consensus.
- VH: I have a need for polygons on 2D data (as in link polygons), not sure where meshes came from but can see this making sense for volumetric data.
- JM: need to bridge the multi-file world
- How/where to store image acquisition and other metadata a la OME-XML.
- DS: BINA working with a number of groups to build a set of recommendations for what should be in the metadata. Keep running into the question of what should be in the zarr-world and what should be on the pure metadata side.
- Also: you don’t want metadata to be scattered across a chunked filesystem. You want to be able to interpret it all. Metadata stored with a tile.
- JS: wrestled some with that with CZI. cf. mass spec field. great to have a list of things, but if it’s not tightly defined it will be a nightmare to implement. (incl. catastrophic effects of resolution units)
- RH: how is HCS handling it now?
- KC: would suggest put all the metadata into one thing.
- BK: also an issue with mutability. caching problem. experimented with log structure for the metadata. makes sense in some situations. You get a permanent record of all the writes. (Good for provenance as well, though that’s not for every use case)
- KC: or use a database?
- “remote links”
- relationships between datasets
- Label features (status?)
- Draga implemented for smallish metadata
- Josh: for large numbers likely need to look at a binary format.
- DB: datatype that you use is important. Trying to figure out when to use datatype that reflects the number of labels (1 vs 2^32)
- Saalfeld proposed that every array representing labels use the largest datatype and then provide a metadata mapping. A layer of complexity, but it preserves the full space.
- BK: experimented a lot with that for label block annotations. For smaller sizes (8^3 minichunks inside large blocks) the labels aren’t changing very much. Great compression technique. Or is that part of the spec? This happens in neuroglancer label compression, and we have an enhanced version in DVID label block compression with multiple indirections. Also useful for fast relabeling since you just modify the lookup table and not the underlying data per voxel.
- DS: problem with storing labels as labelled images is that objects can’t overlap. contour maps or run-length would allow overlapping. (Also no problem with maximum encoding in an image)
- LK: use a label matrix as a mapping, where they do overlap you have a list of 2 objects.
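The indirection idea running through the notes above (Saalfeld's mapping, BK's lookup tables) can be sketched in a few lines: the on-disk array stores compact ids, and a side table maps them to the real (possibly 64-bit) object ids, so relabeling touches only the table, never the voxels. The values and table layout below are illustrative, not part of any spec.

```python
# On-disk label chunk: small ids in a small datatype.
stored = [[1, 1, 0],
          [2, 2, 0]]

# Side table mapping stored ids to full-range object ids (0 = background).
lut = {0: 0, 1: 10_000_000_001, 2: 10_000_000_002}

# Resolving to "real" ids is a pure lookup.
resolved = [[lut[v] for v in row] for row in stored]
print(resolved[0][0])  # 10000000001

# Fast relabeling: merging object 2 into object 1 is a one-entry edit to
# the table; no voxel data is rewritten.
lut[2] = lut[1]
```

This is also where overlapping labels could slot in: a stored id could map to a *list* of object ids (LK's suggestion above) rather than exactly one.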
- Storage from the sysadmin side (KC)
- File metadata performance
- File count/file count per directory issues
- Object vs File - what is expected?
- Deal with Zarr & N5 that have to be managed. What exactly is expected by the formats on the storage side?
- Filesystems can’t handle a large number of files (GPFS/Lustre)
- There should be a lower limit on the size of files. Should there be an upper limit on the count of files?
- Saalfeld’s lab has 845M files in 252 TB.
- Latency hit from that many files.
- BK: nested directories only get you so far. Trade-off between how small the files are… NG moves between sharded and unsharded formats. Can potentially then use a single round-trip for several things. i.e. not just for local, also blobstores.
- JAM: see intake/fsspec-reference-maker on GitHub (functions to make reference descriptions for ReferenceFileSystem)
- BK: problems with in-place writes
- immutable is easier
- OME-Zarr as write format (JS)
- read or write format
- thinking of vendors wanting to slam things to disk, cached into local file structure
- in general, does it make sense that this is an end format but you may use an intermediate
- BK: from fly-EM side, think the primary use case for zarr/n5 was for parallel writes. quickly move data into a clustered context. (note: mutations are different from ingestion. could use a sharded system. parallelize along shards) granularity of your locking is sufficiently small… use databases.
- LK: cameras would never implement OME-Zarr (writing 64^3 chunks)
- KC: unsuited for data dumps off instruments. getting a single stream from an instrument.
- JS: points taken, but mass spec vendors didn’t make an effort to implement the open standard for writing. Maybe this format is next.
- DB: microscopists who buy expensive cameras currently exert zero force; it’s on the PIs & customers.
- JAM: we will try to organize a union (QUAREP-LimI is possibly just that? https://quarep.org/)
- BK: key-value storage coming. cut out POSIX. implement on the storage. Samsung has a team that is talking about this constantly. (KC: no one has implemented) See the SNIA Key Value Storage API Specification.
(There have been prototype drives but nothing popular: “Samsung Announces Standards-Compliant Key-Value SSD Prototype”)
- CT: one stream per camera could still be fast. then would need rechunking for different access patterns.
- BK: seq. versus random writes.