Google-maps type browser

Hi Christian,
I just wonder whether you know of a service or code that could be used to generate something like this:
https://www.petapixelproject.com/
I need an overview image with some areas available at much higher magnification.

Best regards,
Ilya

1 Like

I think the OME-NGFF file format together with the vizarr viewer would be a great open-source solution for this! I'll cc @joshmoore to guide you there.

1 Like

Hi @Ilya_Belevich,

Indeed, for just very large images, OME-Zarr + vizarr (and additionally, if 2D, OME-TIFF + OMERO) would provide deep zooming, but it sounds like the feature you’re looking for is higher resolution only at certain locations?

~J.

Thank you for the suggestions!

it sounds like the feature you’re looking for is higher resolution only at certain locations

Yes, we have overview images (2D) + high-res snapshots of several areas. The idea is to give users the possibility to start with the overview image and allow them to zoom into those high-res areas.
In this sense it is slightly different from just having a viewer of a large-field image in pyramidal format.
Would OME-Zarr + vizarr still be an option?

I think I have the same use case: I had a large low-res 2D EM image and then high-res tomograms at different regions.
Personally, I would save all the data as different images in BigDataViewer format and then use the affine transform in the bdv.xml to position them correctly in 3D space. If you open all of them simultaneously in BDV, the visualisation that you want will be automatic.

Here are a few screenshots from my use case.

Input data subset

Everything together in BDV

Zoom in on one region (interactive, using mouse wheel)

4 Likes

…I would recommend using GitHub - bigdataviewer/bigdataviewer-playground to do this as it offers a user interface for:

  1. adding multiple image sources to BigDataViewer
  2. changing the blending mode so that you don’t have the issue you see in the above screenshots, where the EM images add up and saturate the LUT

cc @NicoKiaru and @schorb

4 Likes

That’s pretty much what I want! Can you deploy your project for others to use?

I just bumped into https://openlayers.org/; they have several examples that look very close:

  1. Here they change from one map layer to another upon reaching a certain zoom level
    Layer Min/Max Resolution

  2. Limited Layer Extent

  3. Static Image

  4. Zoomify

Several installation packages are required, but I just finished a small demo example and was able to make deployable code.
Now the idea is to generate the images in Zoomify format and see whether I can somehow connect the high-res views to it…
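
For reference, here is a minimal, untested OpenLayers sketch of that combination: an overview served as Zoomify tiles plus one high-res snapshot placed at its bounding box that only appears once you zoom in. The tile/snapshot URLs, image sizes, bounding boxes and the resolution threshold of 2 are all placeholders that would need to be adapted.

import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import ImageLayer from 'ol/layer/Image';
import Zoomify from 'ol/source/Zoomify';
import Static from 'ol/source/ImageStatic';
import Projection from 'ol/proj/Projection';

// Pixel dimensions of the overview image (placeholders).
const width = 20000;
const height = 15000;
// Zoomify puts the origin at the top left, so y runs from 0 down to -height.
const fullExtent = [0, -height, width, 0];

// A flat pixel-space "projection" shared by all layers, so nothing gets reprojected.
const pixels = new Projection({ code: 'pixels', units: 'pixels', extent: fullExtent });

// Overview exported as Zoomify tiles; hidden once the user zooms in past the threshold.
const overview = new TileLayer({
  source: new Zoomify({ url: 'tiles/overview/', size: [width, height], projection: pixels }),
  minResolution: 2, // drop this option if the overview should stay visible at all zoom levels
});

// One high-res snapshot, positioned by its bounding box in overview pixel coordinates
// and only rendered when zoomed in far enough (the threshold is a guess).
const regionExtent = [4000, -6000, 6000, -4000]; // [minX, minY, maxX, maxY]
const region = new ImageLayer({
  source: new Static({ url: 'snapshots/region1.png', imageExtent: regionExtent, projection: pixels }),
  extent: regionExtent,  // do not render this layer outside its bounding box
  maxResolution: 2,      // visible only below this view resolution, i.e. when zoomed in
});

const map = new Map({
  target: 'map', // id of a container <div> on the page
  layers: [overview, region],
  view: new View({ projection: pixels }),
});
map.getView().fit(fullExtent); // start fully zoomed out on the overview

Each additional high-res area would just be another ImageLayer (or, for a really large snapshot, its own Zoomify layer) with its own bounding box and resolution threshold.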

What do you mean? Host the images in the cloud?

Otherwise, of course you can deploy it; it is just basic Fiji without any installation (or at most one Update Site to be added).

Like accessing the dataset via URL in a browser.

Yes, this is also possible; then you have to save the data in n5 or ome.zarr format in an object store.

This is what we are doing here: GitHub - mobie/mobie: MultiModal Big Image Data Sharing and Exploration
But there is not yet an easy way for others to use our framework (we are working on it…). The main issue is also that people would need an object store to save their data in. Do you have one?

I do not think that we have anything like that. That is why I thought that taking the g-maps approach might be easier, but now I am not totally sure :]

And where would you save the data in this case? Google Drive?!

Following up on the JavaScript side of this, as opposed to the working Java version that Christian is talking about, you can see an example of multiple modalities in Vitessce (a sibling of vizarr) here: Vitessce

What I don’t know how to do is to have the overview/macro image disappear as the user zooms in. That would likely need to be new code.

:wink: Any reason we can’t move this to the public forum?

As Christian points out, you’ll need storage somewhere. We should probably start there, with what you have in terms of infrastructure for making the data available remotely, to see which of the options will be easiest to achieve.

I found out yesterday that the library that’s being used, at least on the Python side, can read Zarr from Google Drive! (Not sure about the performance though.)

~Josh

I was in fact thinking the whole time that this was on the public forum.
The reason it is here is that Ilya started this as a private thread.
I agree, it is a bit of a shame, as this could be interesting for many folks.

1 Like

Do you know if the spatial relations are defined in a similar way to BDV? Would this global transformation be defined directly in the OME-Zarr format header? Or does it require additional metadata?

:+1: Converting it to a forum post then if there are no objections.

Not yet. I was just sent that example this morning. I’ll CC in others once this is public and we can continue the discussion.

~J.

We have a local server that runs all kinds of webpages; I thought it might be the easiest solution. But after a few tests yesterday, it seems to be missing some required libraries. As I do not know the details of what is behind OpenLayers, it is rather hard for me to draw exact conclusions, as it may just be a problem with my code. For OpenLayers there were also some examples of images taken from Flickr. Google Drive may be a good alternative to a local copy.

Any reason we can’t move this to the public forum?

As with many things, it was supposed to be a short question, as I was expecting that there was an existing server somewhere that could be used for that. For me it is fine to open the thread.

see an example of multiple modalities in Vitessce (a sibling of vizarr) here: Vitessce. What I don’t know how to do is to have the overview/macro image disappear as the user zooms in. That would likely need to be new code.

At least it is possible to turn off a dataset…
They mention: “directly from OME-TIFF files and Bio-Formats-compatible Zarr stores”
Do both of these options support bounding box info? Can I have several datasets with their own bounding boxes within a Zarr store?
In that case the Avivator service looks like a ready solution, which requires just a link to the dataset: http://avivator.gehlenborglab.org/
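
(If I remember correctly, Avivator takes that link as an image_url query parameter, so a shared view of a publicly hosted OME-TIFF or OME-Zarr would look something like

http://avivator.gehlenborglab.org/?image_url=https://example.com/data/overview.ome.tiff

where the example.com URL is just a placeholder.)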