Neuroglancer vs OMERO.FPBioimage

FPBioimage looks really great. I love that there is a VR component!

Has anyone looked at bringing Neuroglancer to OMERO? It is pretty smooth and combines 2D and 3D.

Live demo (must use Chrome):!{"dimensions":{"x":[8e-9%2C"m"]%2C"y":[8e-9%2C"m"]%2C"z":[8e-9%2C"m"]}%2C"position":[2914.500732421875%2C3088.243408203125%2C4045]%2C"crossSectionScale":3.762185354999915%2C"projectionOrientation":[0.31435418128967285%2C0.8142172694206238%2C0.4843378961086273%2C-0.06040274351835251]%2C"projectionScale":4593.980956070107%2C"layers":[{"type":"image"%2C"source":"precomputed://gs://neuroglancer-public-data/flyem_fib-25/image"%2C"name":"image"}%2C{"type":"segmentation"%2C"source":"precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth"%2C"segments":["158571"%2C"21894"%2C"22060"%2C"24436"%2C"2515"]%2C"name":"ground-truth"}]%2C"showSlices":false%2C"layout":"4panel"}


Thanks for the suggestion.
I don’t know that anyone has looked at using Neuroglancer with OMERO.
But Neuroglancer looks very nice and it would be great to have it use data from OMERO.

I am sure that there are many different ways that the integration could be performed.
For example, I see that Neuroglancer supports Zarr as a data source, and the OME team is actively working on support for Zarr - see
It just takes time for someone to become familiar with both projects and work out the best way of integrating them.
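As a rough sketch of what that could look like once OMERO data is served as Zarr over HTTP: Neuroglancer can point an image layer at a Zarr store directly via its `zarr://` source scheme. The URL below is purely hypothetical, just to illustrate the shape of the layer config:

```json
{
  "layers": [
    {
      "type": "image",
      "source": "zarr://https://omero.example.org/zarr/image.zarr",
      "name": "omero-image"
    }
  ]
}
```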



I have been a heavy Neuroglancer user across a variety of imaging modalities. It’s great for remote access to giant data (notably, all of the EM examples, like the one @dahurt linked to). Personally, one of the most useful features has been the ability to share a link to a specific view, including camera position, layer settings, and segment selection. There are also a number of programmatic interfaces, allowing some fun interactions, e.g., interactive proofreading.
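Those shareable links work because Neuroglancer serializes the entire viewer state as URL-encoded JSON after the `#!` in the URL (which is why the demo link above is full of `%2C` escapes). A minimal stdlib-only sketch of building such a link by hand, assuming the public demo instance as the host:

```python
import json
from urllib.parse import quote

def state_to_url(state, host="https://neuroglancer-demo.appspot.com"):
    """Serialize a viewer-state dict into a shareable Neuroglancer link.

    The state is compact-encoded as JSON and percent-escaped into the
    URL fragment after "#!".
    """
    fragment = quote(json.dumps(state, separators=(",", ":")), safe="")
    return f"{host}/#!{fragment}"

# A minimal state with one image layer, mirroring the demo link above.
state = {
    "layers": [
        {
            "type": "image",
            "source": "precomputed://gs://neuroglancer-public-data/flyem_fib-25/image",
            "name": "image",
        }
    ],
    "layout": "4panel",
}
url = state_to_url(state)
```

In practice the neuroglancer Python package ships its own helpers for state handling, so you would rarely roll this by hand; the sketch is just to show why view sharing is effectively free.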

In any case, given the context of OMERO, I have been toying with using Neuroglancer to interact with OMERO data, specifically, data converted into Zarr using bioformats2raw (or omero-cli-zarr).

It works, through the use of the translation matrix to map the data’s dimensions to Neuroglancer’s. Unfortunately, channels must be in a single chunk if you want them passed to a single shader, so I’ve had to rechunk the Zarr data such that the chunk size spans all channels. Fortunately, with the magic of Zarr, the files remain compatible with all other tools. :slight_smile:
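For anyone hitting the same issue: the rechunking boils down to making the channel axis of the chunk shape span the full channel extent, leaving the other chunk dimensions alone. A minimal, library-agnostic sketch (the (t, c, z, y, x) axis order and the helper name are my own; the actual data copy would be done with zarr or dask):

```python
def channel_spanning_chunks(shape, chunks, channel_axis=1):
    """Return a new chunk shape in which the channel axis covers all
    channels, so Neuroglancer can hand every channel to one shader call."""
    new_chunks = list(chunks)
    new_chunks[channel_axis] = shape[channel_axis]
    return tuple(new_chunks)

# Example: a 3-channel volume in (t, c, z, y, x) order whose original
# chunks split the channel axis.
shape = (1, 3, 16, 256, 256)
chunks = (1, 1, 16, 64, 64)
print(channel_spanning_chunks(shape, chunks))  # (1, 3, 16, 64, 64)
```

With zarr-python this means creating a destination array with the new chunk shape and copying the data across; because Zarr decouples chunking from the logical array, downstream tools see the same data either way.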


Very cool, @perlman. Do you see the channel chunking issue as something that needs addressing in the spec or in implementation? ~Josh

This is almost certainly an implementation issue within Neuroglancer.

While I think chunks should typically include all channels for RGBA images, I can come up with examples where that is less than ideal, such as an analysis workflow focusing on a subset of channels, or channels being imaged over multiple sessions.

I love the decoupling of the storage (Zarr) from the metadata (NGFF/OME-Zarr/name du jour) for this reason. The format remains compatible as I tweak things under the hood.


Here’s a neuroglancer demo of data hosted from the IDR.

Nothing too fancy, but I think it shows yet another benefit of using common formats. :slight_smile:

(Cross-posted from the thread on IDR Zarr access.)
