Ha ha, thanks! I started working not on meshes but on SMLM, because collaborators are doing STORM.
It is extremely early, but it helps me get a feel for zarr, and both problems are similar; feedback welcome!
One mechanism I feel could be interesting in SMLM (I was discussing this with Roxane Fabre at CIML) would be the ability to select a detected point in the rendered image and link it back to the raw file (to assess the quality of the input, see if it’s a false positive, etc.). This is not possible today because the tiffs are too big, and it’s a hassle to open them manually.
So (sorry for the detour), as for mesh data and ROIs, we want to be able to link an object to voxel(s) in the raw data. Fortunately, everybody lives in the same 5D space, neatly discretized.
For that, we can have 'table' groups in the zarr (I still need to look at how xarray does that), and we can enforce that every such table has at least [x, y, z, c, t] columns.
That way we can access data associated with a region of the image through a simple query, irrespective of the current scale.
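To make the idea concrete, here is a minimal sketch of such a table and a region query. The column names follow the mandatory [x, y, z, c, t] set above; everything else (the `intensity` column, the `query_region` helper, the sample values) is made up for illustration, and a plain numpy structured array stands in for the on-disk zarr table.

```python
import numpy as np

# Hypothetical point table: every row carries the mandatory
# [x, y, z, c, t] columns, plus any extra per-point data.
dtype = [("x", "f4"), ("y", "f4"), ("z", "f4"),
         ("c", "i4"), ("t", "i4"), ("intensity", "f4")]
detections = np.array(
    [(10.5, 20.0, 0.0, 0, 0, 850.0),
     (11.2, 21.5, 0.0, 0, 0, 430.0),
     (99.0, 99.0, 0.0, 0, 1, 120.0)],
    dtype=dtype,
)

def query_region(table, x0, x1, y0, y1, t):
    """Return the rows falling inside a 2D region at one time point.

    The query is expressed in raw-data coordinates, so it gives the
    same answer whatever pyramid level is currently displayed.
    """
    mask = ((table["x"] >= x0) & (table["x"] < x1) &
            (table["y"] >= y0) & (table["y"] < y1) &
            (table["t"] == t))
    return table[mask]

hits = query_region(detections, 10, 12, 19, 22, t=0)  # 2 rows
```

This is the "select a point, go back to the raw file" workflow in miniature: the returned rows tell you exactly which voxels to fetch from the full-resolution data.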
For meshes, we can have a mesh group, formatted as WKT (or PLY), and associated data can be stored in adjacent table groups, one for each element: vertex, edge, face.
The thing I don’t see how to set up with WKT is the link between the tables and the geometry elements. For ROIs, I assume each ROI would be an entry in the WKT file, so we could refer to the position of the entry, but for meshes it’s less clear.
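One way the table–geometry link could work is purely positional: row i of the vertex table describes vertex i of the geometry, which is exactly the flat index PLY gives for free. A small numpy sketch, where the `vertex_quality` table and the `face_vertex_data` helper are invented names:

```python
import numpy as np

# Hypothetical mesh as flat arrays: vertices, plus faces that index
# into the vertex array.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])  # one triangle

# Per-vertex side table: same length as `vertices`, joined to the
# geometry only by position (row i <-> vertex i).
vertex_quality = np.array([0.9, 0.4, 0.7])

def face_vertex_data(face_id):
    """Look up the per-vertex table entries for one face's vertices."""
    return vertex_quality[faces[face_id]]

q = face_vertex_data(0)  # quality of the triangle's three vertices
```

The join needs no explicit key column as long as tables and geometry are written (and never reordered) together, which is the part WKT makes awkward.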
In PLY it is easy to get a flat index for the elements, as their number is declared in an element line in the header. The problem with PLY is that the specification is super vague, so if you go beyond 2D it’s difficult to decide what to call your elements (volume?), and parsers aren’t happy.
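For reference, this is the header mechanism I mean: each `element <name> <count>` line fixes how many items of that element follow, which is what makes the flat index well defined. A toy header and a few lines of parsing (both invented for illustration):

```python
# Toy PLY header; only the `element <name> <count>` lines matter here.
header = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
"""

def element_counts(header_text):
    """Map each declared element name to its declared count."""
    counts = {}
    for line in header_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "element":
            counts[parts[1]] = int(parts[2])
    return counts

counts = element_counts(header)  # {'vertex': 3, 'face': 1}
```

Nothing in the format stops you from declaring `element volume 2` the same way; the trouble is that existing parsers only expect the conventional vertex/edge/face names.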
That’s it for my current thoughts, any insight welcome (especially from the people who work on ROIs, I’m afraid I don’t remember who that was).