Standard workflow and format to handle huge slidescanner images

Dear All,

I wanted to start a discussion on best practices for dealing with (large) slide scanner images.

We started scanning at high resolution, which results in RGB images that are typically larger than 50,000 x 50,000 pixels.

At the moment I export (in our case from NIS-Elements) huge standard RGB TIFFs that won't open easily in most software, since a single plane is larger than 2^31 pixels and there is no resolution pyramid.

I then open these files in Python as numpy arrays (with tifffile) and plan to resave them as:

Uncompressed binned TIFFs (4x4, 8x8, 16x16), one RGB file per resolution.

The same data as a pyramid in an HDF5 file
(…and maybe split up into 1000x1000 patches that overlap by 100 pixels at each border).
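The first option (binned TIFFs) is straightforward to prototype. A minimal numpy block-averaging sketch, assuming the image fits in memory (the function name and the edge-cropping behaviour are my own, not from any particular package):

```python
import numpy as np

def bin_rgb(image, factor):
    """Downsample an RGB array by averaging factor x factor pixel blocks.

    Assumes image is a (height, width, 3) uint8 array; edge rows/columns
    that do not divide evenly by the factor are cropped.
    """
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# e.g. one 4x4-binned level of a (mock) slide image
image = np.random.randint(0, 256, (1024, 2048, 3), dtype=np.uint8)
binned = bin_rgb(image, 4)  # shape (256, 512, 3)
```

Writing each binned level with tifffile.imwrite then gives one RGB file per resolution.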

Do you have any suggestion on what to keep in mind to ensure ideal cross-platform compatibility and performance?

The output files are primarily:
a) to hand over to our user to inspect them and make figures
b) as input for manual or ML-based analysis

I do not care much about the preservation of metadata at this stage (and if so preferably in a separate file).

Thanks a lot & Kind regards


@Pete @Christian_Tischer @constantinpape


Personally, I would save them in the BigDataViewer-compatible h5 format (which is multi-resolution).
Then they can be viewed nicely in Fiji, and you can open the h5 files from Python etc. as well.
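For context, a BigDataViewer dataset pairs an XML file with an HDF5 file whose layout is roughly the following (a sketch from memory of the BDV file format; the exact dataset names and attributes should be checked against the BDV specification):

```
bigimage.xml           companion XML describing setups and mipmap levels
bigimage.h5
├── s00/
│   ├── resolutions    (levels, 3) float64, e.g. [[1,1,1],[4,4,1],[16,16,1]]
│   └── subdivisions   (levels, 3) int32 chunk sizes, e.g. [[64,64,1], ...]
└── t00000/
    └── s00/
        ├── 0/cells    full-resolution dataset (z extent 1 for 2D data)
        ├── 1/cells    downsampled level 1
        └── 2/cells    downsampled level 2
```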

Dear Tischi,

Thanks a lot!

My options to export from NIS-Elements are quite limited.
Thus, I will likely have to do this step in Python.

BigDataViewer is amazing for 3D data, but mine is 2D RGB with a white background.
I am not sure: has it been used for slidescanner datasets?
I have more of a “Google Maps”-like problem.

Kind regards


This is exactly the point of BigDataViewer!
One main feature is that it works with resolution pyramids, in 3D and 2D.

We can provide you with python code to write Bdv files if you wish; once @constantinpape is back from his holidays :wink:

Personally, I never understood why BigDataViewer is not used for “SlideScanner” data.

But if you need more infrastructure than just viewing, probably QuPath @Pete is the way to go.

And if you do try QuPath, I would recommend trying the original format first and, hopefully, bypassing the export process altogether.

That would be amazing!

But the nd2 files cannot be opened by QuPath.
The error reads:
Qupath encountered a problem…
The problem is likely related to:

I think that is primarily because dedicated tools for 2D exist.
I am for example currently exploring OpenSeadragon.

I will chat with Constantin once he is back.
But I am afraid that his code uses dependencies that might not work with such large array dimensions.


I use ND2 files in QuPath pretty regularly. However, if they are big stitched images, I first use bftools to make them pyramidal OME-TIFFs.


Thanks a lot!!
Bio-Formats supports some nd2 file variants, but not the one from our slidescanner.

But I might still export a huge OME-TIFF using NIS-Elements and then make it pyramidal using Python or bftools.

What kind of pyramid do you make, exactly?
A resolution pyramid?
Or do you split the image into tiles?
If so, of which size?

Ah I see. Well QuPath uses Bio-Formats so that explains why it won’t open your ND2 as well.

I make a resolution pyramid because QuPath can handle them so nicely compared to most options where you have to load in the whole stitched image at full res.

The command I used would be something like below for a 50K by 50K image:

bfconvert -tilex 512 -tiley 512 -noflat -pyramid-resolutions 4 -pyramid-scale 4 "D:\bigImage.nd2" "D:\bigImage.ome.tiff"

To be honest, I can’t remember how much I initially played around with the number of levels, the scaling and the tile size. I think I remember @petebankhead telling me to try downsampling by a factor of 4 until the lowest resolution is around 500-1000 pixels wide or high. The tiling is needed as bftools can’t load the full stitched image at once.
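That rule of thumb translates into a quick calculation of how many pyramid levels to request. A sketch (the function is my own, just to illustrate the arithmetic):

```python
def pyramid_levels(width, height, scale=4, min_size=1000):
    """Number of pyramid levels needed so the smallest level is at most
    min_size pixels on its longest side (full resolution counts as level 1)."""
    longest = max(width, height)
    levels = 1
    while longest > min_size:
        longest /= scale
        levels += 1
    return levels

print(pyramid_levels(50000, 50000))  # -> 4 (50000 -> 12500 -> 3125 -> ~781)
```

For a 50K x 50K image this gives 4 levels, matching the -pyramid-resolutions 4 in the command above.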


I think you’ll want a pyramidal OME-TIFF, which I believe makes both QuPath and Orbit an option (does/could BDV support this directly too?).

However, in general QuPath doesn’t really need Bio-Formats to support the format; it just needs a reader that implements its ImageServer interface. There are implementations for ImageJ, OpenSlide & recently also OMERO. If you can read your ND2 files somehow, then you may be able to add QuPath support with your own reader. This could involve extra steps like dynamically stitching fields of view if it really had to.

Which version of bfconvert do you use?
In version 5.6.0, -noflat, -pyramid-resolutions and -pyramid-scale are not recognized.

If I run:
bfconvert -tilex 512 -tiley 512 E:\export3\test002_1.tif E:\export3\bigImage.ome.tiff

I get an Array size too large error:

TiffDelegateReader initializing E:\export3\test002_1.tif
Reading IFDs
Populating metadata
Checking comment style
Populating OME metadata
[Tagged Image File Format] -> E:\export3\bigImage.ome.tiff [OME-TIFF]
Switching to BigTIFF (by file size)
Exception in thread “main” java.lang.IllegalArgumentException: Array size too large: 57856 x 64512
at loci.common.DataTools.safeMultiply32(
at loci.formats.DelegateReader.getOptimalTileHeight(DelegateReader.256)
at loci.formats.ImageReader.getOptimalTileHeight(

I think it should be on the same version as Bio-Formats in general, so you might want to try 6.2.0.
More usefully, this link has a link to what I think is the current download.
It was a bit of a maze to find, since a Google search for bfconvert comes up with 5.7.1 links first.

Thanks a lot!
Now at least all arguments are known.
If I start with a 4x4-binned input, the entire pipeline works and I can import it into QuPath.

However, with the full-size image I still get the same
Array size too large: 57856 x 64512 error.

My system has plenty of memory left, but it might still be a Java heap space issue.


The array size too large message is in bftools or later in QuPath? If in bftools, it sounds like maybe your tiling isn’t working since it is still trying to load the whole image? Or I might be missing something, I haven’t tried this on a large image or anything :slight_smile:

Sorry for the confusion.

The error is in bftools. That is why I added @s.besson to the discussion.

I was hoping that bftools could convert a huge (non-tiled) TIFF file into a pyramidal OME-TIFF.
Can it?

If no:
I have the entire file as a numpy array in Python.
Obviously I can easily write it as tiles (one tile per standard TIFF file) to disk.
But I cannot import that into QuPath, can I? (That would be great too.)

And I would not know how to instruct bftools to properly interpret my files so as to convert them into an OME-TIFF.
Is that easy?

As a last resort, I might write a pseudo OME-TIFF from Python:

similar to what is described in:

My hope was just that such a solution would already exist.


Are you setting the max heap size to be allocated using set BF_MAX_MEM? The default is 512m, so this is definitely needed; otherwise there will be memory problems.
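For example (a command-line fragment; the 26g value is just sized to the machine, and the file names are placeholders):

```shell
# Linux/macOS shell; on Windows cmd use:  set BF_MAX_MEM=26g
export BF_MAX_MEM=26g
bfconvert -noflat -tilex 512 -tiley 512 -pyramid-resolutions 4 -pyramid-scale 4 \
    bigImage.tif bigImage.pyramid.ome.tiff
```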

In QuPath, that would usually generate Java heap space errors, not Array size too large errors. *My QuPath experience may or may not be applicable here :slight_smile:

@Tobias Are you using the exact line @lmurphy listed above?

And @lmurphy, did you use that command on single huge TIFFs, or was your comment about tiling above referring to some option on the command line?

I don’t know about importing individual tiles into QuPath. I don’t think that is something that would work well right now.

Yes, I do. The only difference is that I start from a standard TIFF, not an nd2.

I did: set BF_MAX_MEM=26g

I suspect that this might be the problem that causes the error.

tifffile can write pyramidal TIFF files with a variety of options. For example, the uncompressed BigTIFF file produced by the following code works for me with QuPath-0.2.0-m3 and Bio-Formats but not with OpenSlide on Windows:

import tifffile
import cv2  # OpenCV for fast resizing

image = tifffile.imread('CMU-1.tiff', key=0)
h, w, s = image.shape

with tifffile.TiffWriter('pyramid.tif', bigtiff=True) as tif:
    level = 0
    while True:
        tif.save(
            image,
            software='Glencoe/Faas pyramid',
            tile=(256, 256),
            resolution=(1000/2**level, 1000/2**level, 'CENTIMETER'),
            # compress=1,  # low level deflate
            # compress=('jpeg', 95),  # requires imagecodecs
            # subfiletype=1 if level else 0,
        )
        if max(w, h) < 256:
            break
        level += 1
        w //= 2
        h //= 2
        image = cv2.resize(image, dsize=(w, h), interpolation=cv2.INTER_LINEAR)
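As a side note, newer tifffile releases can also store the reduced levels as SubIFDs, which Bio-Formats (and hence QuPath) recognizes as a pyramid. A minimal sketch on a small synthetic image (file name and downsampling factors are arbitrary, and plain strided slicing stands in for proper resizing):

```python
import numpy as np
import tifffile

# small synthetic RGB stand-in for a slide image
data = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)

with tifffile.TiffWriter('subifd_pyramid.tif', bigtiff=True) as tif:
    options = dict(tile=(256, 256), photometric='rgb')
    # announce two reduced-resolution SubIFDs, then write them
    tif.write(data, subifds=2, **options)
    tif.write(data[::4, ::4], subfiletype=1, **options)
    tif.write(data[::16, ::16], subfiletype=1, **options)

# the file reads back as a single pyramidal series
with tifffile.TiffFile('subifd_pyramid.tif') as tif:
    levels = tif.series[0].levels
    print([lvl.shape for lvl in levels])
```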