Building a large pyramidal representation

Hello all,

I posted a similar topic a couple of weeks ago, on how to build a single pyramidal representation from several small images that originally belonged to the same microscope slide. The reason I want to do this is that I would now like to generate my own “microscope slide format” (any of the Bio-Formats formats would do) with a mask or something similar.

So, right now, my main concern is the following: if I want to do this without any intermediate files (a single pipeline that starts from the initial slide image, processes it with whatever methodology is necessary, and ends with the processed, final slide image), what is the best way to do it at the “nitty-gritty” level? (I am mainly after the lower-level approach, but high-level answers are also appreciated.) If writing intermediate files is necessary, that is fine, but I would still like to know how to then go from them to the large pyramidal representation.

I am working mostly in Python, but I would appreciate answers both at a general level and at a language-specific one.
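To illustrate what I mean by a pyramidal representation, here is a toy sketch in plain NumPy (function and variable names are made up for the example): each pyramid level halves the previous one by 2×2 block averaging. Actually writing the levels out to a Bio-Formats-readable file would be a separate step.

```python
import numpy as np

def build_pyramid(level0, n_levels=3):
    """Toy pyramid: each level halves the previous one by 2x2 block averaging.

    level0 is a 2-D array whose sides are assumed to be divisible by
    2 ** (n_levels - 1); a real writer would pad instead of assuming this.
    """
    levels = [level0]
    for _ in range(n_levels - 1):
        prev = levels[-1]
        h, w = prev.shape
        # Average each non-overlapping 2x2 block into a single pixel.
        down = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(down)
    return levels

# A fake 8x8 "slide": the pyramid levels are 8x8, 4x4 and 2x2.
pyramid = build_pyramid(np.arange(64, dtype=float).reshape(8, 8))
print([lvl.shape for lvl in pyramid])  # [(8, 8), (4, 4), (2, 2)]
```

This is obviously a toy; the open question for me is how to get from something like this to a proper slide file.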

I know this is a bit of a complicated question (at least I am struggling to find an answer), but hopefully someone out there will have some info on it :slight_smile:


EDIT: to make it as clear as possible: if there is no tool available for this, any guidelines on how to do it are greatly appreciated.

This isn’t (yet) an answer to your question, but nevertheless I have been working on something relevant within QuPath…

Basically, QuPath is designed to handle pyramidal images. It does this by defining an ImageServer interface, which is essentially the thing through which pixels are requested (for a given resolution level and bounding box), along with metadata.

Under the hood, the ImageServer might be grabbing its pixels from a single image, but it could also be doing more sophisticated things… e.g. spatial or color transforms, or even combinations of them by wrapping ImageServers within ImageServers. The relevant implementation here would be a SparseImageServer, which can be built from a collection of image tiles and the bounding boxes of where they should be, creating a (pseudo) large image dynamically.
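The idea behind the sparse server can be sketched in a few lines of Python (this is just the concept with hypothetical names, not QuPath’s actual Java API): given tiles plus the bounding boxes where they belong, a region request is answered by pasting the relevant tiles onto a background canvas.

```python
import numpy as np

class SparseMosaic:
    """Concept sketch of a sparse image server: tiles + placements -> big image.

    Hypothetical class, not QuPath's API. Each entry is (tile_array, x, y),
    where (x, y) is the tile's top-left corner in full-image coordinates.
    """

    def __init__(self, width, height, background=0):
        self.width, self.height = width, height
        self.background = background
        self.entries = []

    def add_tile(self, tile, x, y):
        self.entries.append((tile, x, y))

    def read_region(self, x, y, w, h):
        # Start from the background value; regions no tile covers stay that way.
        region = np.full((h, w), self.background, dtype=float)
        for tile, tx, ty in self.entries:
            th, tw = tile.shape
            # Intersect the tile's bounding box with the requested region.
            x0, y0 = max(x, tx), max(y, ty)
            x1, y1 = min(x + w, tx + tw), min(y + h, ty + th)
            if x0 < x1 and y0 < y1:
                region[y0 - y:y1 - y, x0 - x:x1 - x] = \
                    tile[y0 - ty:y1 - ty, x0 - tx:x1 - tx]
        return region

mosaic = SparseMosaic(8, 8, background=255)
mosaic.add_tile(np.zeros((4, 4)), 0, 0)   # dark tile, top-left
mosaic.add_tile(np.ones((4, 4)), 4, 4)    # light tile, bottom-right
full = mosaic.read_region(0, 0, 8, 8)
print(full[0, 0], full[7, 7], full[0, 7])  # 0.0 1.0 255.0
```

The real implementation also has to worry about resolution levels, channels and overlapping tiles, but the core operation is this kind of paste-into-canvas.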

I’ve recently been exploring how to convert any ImageServer into an OME-TIFF pyramid, and handling the awkward problems that can occur whenever tiles are missing, or weirdly arranged, or need to be written in parallel for acceptable performance, etc. This hasn’t yet been packaged up in a usable form, but it will be in the next QuPath milestone release. For now, the pyramid-writing code is here.

One of the biggest challenges has been trying to figure out a sensible way of representing ImageServers that aren’t simple images. In the end I’ve gone with JSON, so potentially it may be possible to do all that you want in Python, pass a JSON representation of your tile layout (including file paths) through the QuPath library, and pick up the pyramidal OME-TIFF on the other side.
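Purely to illustrate the kind of thing I mean by “a JSON representation of your tile layout”, a sketch might look like this (the field names here are invented for the example, not the actual schema):

```python
import json

# Illustrative only: these field names are made up for this example,
# not QuPath's actual SparseImageServer JSON schema.
layout = {
    "width": 20000,
    "height": 15000,
    "tiles": [
        {"path": "tiles/tile_0_0.tif", "x": 0,    "y": 0, "width": 2048, "height": 2048},
        {"path": "tiles/tile_1_0.tif", "x": 2048, "y": 0, "width": 2048, "height": 2048},
    ],
}

text = json.dumps(layout, indent=2)
print(json.loads(text)["tiles"][0]["path"])  # tiles/tile_0_0.tif
```

The essential content is just file paths plus the bounding boxes saying where each tile belongs in the full image.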


Thank you! I am stunned at how complicated it has been to find something like this. Building the JSON representation had occurred to me, but I was looking for something that used it; if QuPath does, then that’s wonderful. I will take a closer look at this.

There is one thing that is still not obvious to me from what I have read, including your answer: if I build this JSON representation, how do I then obtain the pyramidal representation, in particular using the SparseImageServer? Basically, what I still do not fully get is how to assign all the different image paths within the JSON file, or in other words, how to structure the JSON file that QuPath requires. Is there any info on this?