Improve Bio-Formats Image Position Metadata

Hi @joshmoore @dgault @ctrueden ,

Applications where the relative location of one image to other images matters are becoming more abundant. I think this is partly due to more and more “intelligent microscopy” approaches, where people take large low-res overview images and then (automatically) acquire several smaller high-res images inside the low-res images (this is done both in LM and in EM). I think another reason is that we realise that viewers like BigDataViewer in fact allow us to put all those images on top of each other and see the high-res images in the context of the low-res image, which is awesome. In addition, any kind of tiled image data that needs to be stitched is another source of data where relative locations matter.

I was thus wondering whether it would be of general interest to improve Bio-Formats in terms of the metadata that specifies the relative location of images?

For example, @NicoKiaru spotted the following issues with the current implementations:

  • Is the position specified by Bio-Formats the center or the corner of the image?
  • Sometimes the stage/sample can be flipped in XY, or in Z.
  • Sometimes Bio-Formats does not know the unit.
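To make the first ambiguity concrete, here is a minimal sketch (hypothetical numbers; positions assumed to be in micrometres) of how differently an image lands in the world depending on whether the stored position refers to its center or its corner:

```python
# Sketch: how the center-vs-corner ambiguity changes where an image is placed.
# All numbers are made up for illustration; units assumed to be micrometres.

def corner_from_position(pos_um, pixel_size_um, size_px, position_is_center):
    """Return the top-left edge (in um, one axis) implied by a stored position."""
    extent_um = pixel_size_um * size_px
    if position_is_center:
        return pos_um - extent_um / 2.0
    return pos_um  # position already refers to the corner

# A 1000-px-wide image at 0.5 um/px, with stored X position 100 um:
print(corner_from_position(100.0, 0.5, 1000, position_is_center=True))   # -150.0
print(corner_from_position(100.0, 0.5, 1000, position_is_center=False))  # 100.0
```

With a 500 um wide image, the two interpretations disagree by half the image width, which is more than enough to break stitching or overlay workflows.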

I don’t know, but maybe, if enough people are interested one could try to write an application to get some funding for someone to work on this?

I cc some people who could be interested in this.

@VolkerH @swg08 @schorb @NicoKiaru @Alex_H


I would be very interested in this. It is very important when it comes to whole slide images, but uncertainty regarding how to interpret the position information makes me hesitant to integrate automatic repositioning into QuPath.


Related to this, what is the reference frame unit (public static final ome.units.unit.Unit<Length> REFERENCEFRAME)? Is it the unit chosen when Bio-Formats does not know the physical unit in use? This shows up quite often.


I would also be very interested in this. We have many microscopes (Leica, PerkinElmer, Zeiss) that store position metadata, which we usually have to extract in hacky ways to make sure it works.

So improved support would be most welcome!


I am also quite interested in this topic. In addition to multi-resolution processing and visualization, as @Christian_Tischer mentioned, this is important for image registration.


@Christian_Tischer thanks for starting this conversation. Obviously, we agree on the value of metadata. Our experience is that such improvements are best achieved when addressing the needs of a well-defined project. From the open-source side, many recent format improvements have been directly related to IDR submissions, for instance.

The stitching workflow is a very concrete situation where reliable and interoperable positional information would benefit the wider bioimaging community.

We would definitely welcome a project that focuses on clarifying the definition of such metadata to be unambiguous and unifying the behavior across a range of typical formats. The involvement from our side could probably range from providing letters of support to getting more directly involved.

As you mentioned, trying to get this effort appropriately funded is probably a first step. Do you already have thoughts on where such an application could be submitted?


Recently, Micro-Manager started to store a 2D affine transform describing the relation between stage movement and the camera. This allowed us to overlay images collected with different cameras that were oriented differently and had different pixel sizes. I believe that any solution will need this, but also does not need much more (a 3D affine transform may be needed for certain workflows). Agreeing on ways to store affine transforms should be a whole lot easier than agreeing on how to store pixels plus metadata. At least, there are fewer numbers to fight about ;)
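As a sketch of that idea (the matrices and numbers below are made up for illustration, not Micro-Manager's actual API or calibration values): give each camera a homogeneous 2D affine mapping pixel coordinates to stage coordinates, and map camera A pixels into camera B's pixel grid by composing A's transform with B's inverse.

```python
# Each camera gets a 2D affine (here a 3x3 homogeneous matrix) mapping pixel
# coordinates to stage coordinates. Overlaying camera A on camera B is then
# just the composition inverse(B) . A. Hypothetical calibration values.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def invert_affine(m):
    # Invert the 2x2 linear part, then account for the translation.
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    det = a*d - b*c
    ia, ib, ic, id_ = d/det, -b/det, -c/det, a/det
    tx, ty = m[0][2], m[1][2]
    return [[ia, ib, -(ia*tx + ib*ty)],
            [ic, id_, -(ic*tx + id_*ty)],
            [0.0, 0.0, 1.0]]

# Camera A: 0.5 um/px, no rotation.  Camera B: 0.25 um/px, flipped in X.
cam_a = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]]
cam_b = [[-0.25, 0.0, 0.0], [0.0, 0.25, 0.0], [0.0, 0.0, 1.0]]

# Pixel (100, 40) on camera A, expressed in camera B's pixel grid:
a_to_b = matmul(invert_affine(cam_b), cam_a)
print(apply(a_to_b, 100, 40))  # (-200.0, 80.0)
```

The nice property is that rotations, flips, and pixel-size differences all disappear into the same six numbers per camera.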


This would work for me, as I am very happy with the BigDataViewer file format, which has a 3D affine transform already built in. @s.besson would you agree as well?


Completely agreed with both examples: affine transforms should capture most of the information required to express the spatial relationship between images/acquisitions, notably translation, rotation, scaling, and flipping.
The fact that these relationships can be represented as a matrix means many formats, both open and proprietary, already make use of them. As an additional advantage, they are applicable not only to pixels but also to any spatial information, such as regions of interest.
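A small illustration of that last point (values are made up): translation, rotation, scaling, and flipping all fold into one matrix, and the same matrix can be applied unchanged to an ROI vertex.

```python
# One 2D affine combining scale, rotation, an X flip, and a translation.
# The same matrix positions both pixels and ROI vertices. Illustrative values.
import math

def affine(scale, angle_rad, flip_x, tx, ty):
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    fx = -1.0 if flip_x else 1.0
    # Linear part is rotation . scale . flip; last column is the translation.
    return [[fx * scale * c, -scale * s, tx],
            [fx * scale * s,  scale * c, ty]]

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

m = affine(scale=2.0, angle_rad=math.pi / 2, flip_x=True, tx=10.0, ty=5.0)

# A pixel and a polygon-ROI vertex go through the very same transform:
print(apply(m, 1.0, 0.0))  # ≈ (10.0, 3.0)
```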


Hi all,
Great that this topic is back on the front burner. We had started a discussion along this line a few years back, and it looks like we’re all in violent agreement :slight_smile:
Might this be integrated into the work to update the OME model that @joshmoore and @Caterina have been discussing in Proposal for 4D Nucleome Microscopy Metadata Guidelines based on the OME data model?


Maybe it is not appropriate to discuss such things openly, but if anyone here would like to spell out ideas on how to get some funding for such an activity: feel free to mention it (also in a private message if you prefer).


There’s both a data as well as a metadata component, but ultimately future formats should have a common format for representing this, as @nicost suggests.

I wouldn’t think so — this is an ideal place to detail the (dire) needs of the community and how to overcome them!

