Is this related to the .ndpi file extension?

Hello everyone,
I have .ndpi images of around 4 to 5 GB each. Each image has 9 levels (level_count, with downsamples from 1.0 to 256.0) with 3 channels (C) each and 11 slices per channel (Z-stack).
So one count or series has 3 channels with 11 Z slices each = 33 images in total.
I can visualize all this information using the ImageJ or Fiji software. I can also load the image in Python using the openslide-python library, through which I can see dimensions, level_dimensions, etc.
My questions:

  1. My aim is to extract each count or series with its respective channels and Z slices. So in my case: 1 series, 3 channels, 11 slices per channel = 33 images. (I want to extract 33 images for each of the 9 counts or series.) How can I achieve this using the openslide library in Python?
  2. I can see the channel (C) and Z-stack (Z) notation for these images in the ImageJ software, but I couldn't find any Z-stack or C-channel notation in Python while using the openslide library. How can I solve this problem?
  3. And lastly, how can I convert the OpenSlide properties into XML format?
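For question 3, here is a minimal sketch (not from the thread) using only the standard library. OpenSlide's `properties` attribute is an ordinary key/value mapping, so a stand-in dict is used below to keep the example self-contained:

```python
import xml.etree.ElementTree as ET

def properties_to_xml(properties):
    """Serialize a mapping of slide properties to an XML string."""
    root = ET.Element('properties')
    for key, value in properties.items():
        prop = ET.SubElement(root, 'property', name=key)
        prop.text = str(value)
    return ET.tostring(root, encoding='unicode')

# In practice the mapping would come from OpenSlide, e.g.:
#   import openslide
#   props = openslide.OpenSlide('filename.ndpi').properties
props = {'openslide.level-count': '9', 'openslide.vendor': 'hamamatsu'}
print(properties_to_xml(props))
```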

I have devoted a lot of time to solving this but didn't succeed, so any help would be appreciated.
Thank you.

It looks like OpenSlide ignores focal planes in NDPI:

QuPath should support .ndpi z-stacks, assuming they can be read by Bio-Formats:

There are then different ways to export regions as needed:

Thanks for your quick reply, but I was planning to do the whole thing in a Python environment.
Is there any other library or package in Python that can handle these kinds of things?

Hello there,
Is there any way to solve this problem, or any other package I can use in Python to solve it?

You can use paquo as a Python library that interacts with QuPath:

I’m not aware of any Python-friendly library for working with .ndpi files apart from OpenSlide, which as @cgohlke points out doesn’t handle z-stacks (nor does it handle most non-RGB images).

I’ve only seen a few .ndpi stacks, and Bio-Formats could open them – but that’s already Java.

Since .ndpi is TIFF-related (albeit not a very normal TIFF), if you really want to do things in Python then you might need to work with TIFF at a lower level. I think @cgohlke is the expert on that 🙂

Thanks for your comments. I will go through your suggestions and let you know whether my problem is solved. I need to see what attributes I get after loading the .ndpi file using paquo with the QuPath interface. Hope it works 🙂

You could try tifffile together with the imagecodecs and zarr packages:

$ python -m pip install tifffile imagecodecs zarr

Tifffile can give you detailed information about NDPI files. Whether tifffile can uncompress the JPEG compressed image data depends on the size of the images. If the compressed size is larger than ~2 GB, imagecodecs cannot decompress the JPEG stream. If you are not using Windows, imagecodecs cannot decompress JPEG images with widths or lengths >65535. These are limitations of the libjpeg/libjpeg-turbo libraries used by imagecodecs.


import tifffile
import zarr

filename = 'filename.ndpi'

# print detailed information about the NDPI file
with tifffile.TiffFile(filename) as tif:
    for page in tif.pages:
        print(' ', page)
        for tag in page.tags:
            print('  ', tag)
        print('   NDPI_TAGS =', page.ndpi_tags)

# separate image resolutions, slices, channels to uncompressed TIFF files
with tifffile.imread(filename, aszarr=True) as store:
    group =, mode='r')
    assert isinstance(group, zarr.Group)
    for r in group.keys():
        stack = group[r]
        assert stack.ndim == 4  # ZYXC
        for z in range(stack.shape[0]):
            zslice = stack[z]
            for c in range(stack.shape[-1]):
                print('.', end='')
                image = zslice[..., c]
                tifffile.imwrite(f'_r{r}_z{z}_c{c}.tif', image)
            del zslice

Hello there, good morning,
I tried your code, but as you said it won't help me if the compressed size is more than 2 GB; in my case the input files are more than 4 GB. I got an error whether I pass an input of 189 MB or 4 GB, but the errors are different.

Error when I pass the 4 GB file:
Traceback (most recent call last):
  File "/Users/yubraj/PycharmProjects/VIdeo_frame_extraction/ESR4-Codes/", line 29, in <module>
    with tifffile.imread(input_any1) as store:
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 721, in imread
    return tif.asarray(**kwargs)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 2809, in asarray
    result = stack_pages(pages, out=out, maxworkers=maxworkers)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 13033, in stack_pages
    for _ in, pages, range(npages)):
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/concurrent/futures/", line 598, in result_iterator
    yield fs.pop().result()
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/concurrent/futures/", line 435, in result
    return self.__get_result()
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/concurrent/futures/", line 384, in __get_result
    raise self._exception
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/concurrent/futures/", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 13024, in func
    **kwargs)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 5648, in asarray
    func=func, lock=lock, maxworkers=maxworkers, sort=True
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 5512, in segments
    yield decode(segment)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 5501, in decode
    result = keyframe.decode(*args, **decodeargs)
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/tifffile/", line 5406, in decode
    shape=shape[1:3]
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/imagecodecs/", line 807, in jpeg_decode
    raise exc
  File "/Users/yubraj/.conda/envs/Temp1/lib/python3.7/site-packages/imagecodecs/", line 795, in jpeg_decode
    outcolorspace=outcolorspace, shape=shape, out=out)
  File "imagecodecs/_jpeg8.pyx", line 318, in imagecodecs._jpeg8.jpeg8_decode
imagecodecs._jpeg8.Jpeg8Error: Empty JPEG image (DNL not supported)

Process finished with exit code 1

Error when I pass the 189 MB input image:
Traceback (most recent call last):
  File "/Applications/PyCharm", line 1448, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/Users/yubraj/PycharmProjects/VIdeo_frame_extraction/ESR4-Codes/", line 29, in <module>
    with tifffile.imread(input_any2) as store:
AttributeError: __enter__

Python 3.7, PyCharm

And one thing: I don't think there's an aszarr parameter in the tifffile.imread function.

Still looking for help.

Tifffile is out of date. Install the latest version.

I doubt that the size of a JPEG compressed stream in your 5 GB file is more than 2 GB. If you have the chance, try the imagecodecs wheels from PyPI on Windows, which include a patched version of libjpeg-turbo for decoding images with widths or heights >65535.

Hi there,
I tried your suggestion on Windows.
But now I'm getting what I think is a memory error.

Line 29: zslice = stack[z]  # this line has the problem

Is there any way to ignore this error?

You can try adding a del zslice statement after the for c in range loop. Otherwise you need more RAM, at least 32 GB.
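Another way to reduce peak memory (a sketch, not from the thread): zarr arrays support numpy-style basic indexing, so indexing one plane at a time with stack[z, :, :, c] avoids materializing the whole three-channel Z slice at once. The snippet below demonstrates the indexing pattern on a small numpy stand-in for the ZYXC array; with a real file, stack would be group[r] as in the earlier snippet:

```python
import numpy as np

# Stand-in for the zarr array from the NDPI store (shape ZYXC).
stack = np.zeros((11, 64, 64, 3), dtype=np.uint8)

for z in range(stack.shape[0]):
    for c in range(stack.shape[-1]):
        # Read a single Y-by-X plane instead of loading the whole
        # Z slice; zarr only decodes the chunks the selection covers.
        image = stack[z, :, :, c]
        assert image.shape == (64, 64)
```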

Looks like the ndpisplit command line tool is able to split NDPI files into separate files for each level and slice. It rewrites the MCUs of the ginormous JPEG encoded strips to smaller TIFF tiles.

Hi there,
zslice = stack[z] won't let me get to the next step. It throws a memory-related error.

You mean this, right?
zslice = stack[z]
for c in range(stack.shape[-1]):
    del zslice
But this code can't run past line 29.

Yeah, I ran this command yesterday and received the output I wanted in TIFF format. (But I didn't understand why the output size was small (421.37 MB per slice) even though the image dimensions were 186496 × 36608.) Did I do something wrong, or did it just do what you said a minute ago?
Also, I was looking for a Python-based splitter like your code. I did find one Python script that used NDPITools, but it didn't work; I think something is missing there.

I passed this command:
ndpisplit /Users/yubraj/Desktop/ESR4-important-folders/Dataset/Third-Sample-dataset/1/01.ndpi

That looks correct if 11 slices are about 4-5 GB. The output TIFF files are still using JPEG compression but are manageable by normal TIFF readers.

Use Python’s subprocess module.

Hello there,
I tried your code with more RAM (64 GB) and was finally able to extract the z-stack images from the .ndpi file.
Now, while converting these extracted TIFF files to the DICOM standard, I am getting an error on the pixel data because the Pixel Data is too large for an uncompressed transfer syntax.
So now I have to compress these extracted TIFF files using a lossless JPEG method.
I tried the Pillow package to compress these files, but I am getting an error that PIL cannot read these types of files.

Traceback (most recent call last):
  File "/snap/pycharm-community/214/plugins/python-ce/helpers/pydev/", line 1448, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/snap/pycharm-community/214/plugins/python-ce/helpers/pydev/_pydev_imps/", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/home/yuvi/PycharmProjects/task-1/VIdeo_frame_extraction/NDPI-ext/", line 85, in <module>
    img =
  File "/home/yuvi/anaconda3/envs/task-1/lib/python3.7/site-packages/PIL/", line 2944, in open
    "cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file '/home/yuvi/Downloads/DATASET/NDPI-TO-TIFF/Z0-C0.tif'

Process finished with exit code 1

I want to compress these TIFF files with lossless JPEG compression and save them as .tif.
How can I achieve that?
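One possible sketch (an assumption, not from the thread): tifffile.imwrite can rewrite a plane with a lossless codec directly, without going through Pillow. The example uses 'zlib' because it works without imagecodecs; with imagecodecs installed, additional JPEG-family codecs become available (check tifffile's documentation for the exact names and whether DICOM accepts them):

```python
import os
import tempfile

import numpy as np
import tifffile

# Hypothetical data standing in for one extracted Z/C plane.
image = np.arange(256 * 256, dtype=np.uint8).reshape(256, 256)

path = os.path.join(tempfile.mkdtemp(), 'compressed.tif')

# 'zlib' (deflate) is lossless and needs no extra dependencies.
tifffile.imwrite(path, image, compression='zlib')

# Round-trip check: lossless compression reproduces the pixels exactly.
assert np.array_equal(tifffile.imread(path), image)
```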

I am using the same code and get the following error:

My requirements are a little different from the original question: I only want to extract the level-0 TIFF and slice it into small JPEGs.
P.S. There are no Z slices in these .ndpi images: 5 series, with 3 planes in each series.

TypeError                                 Traceback (most recent call last)
<ipython-input-26-a69deff2e455> in <module>
      7         #assert stack.ndim == 4  # ZYXC
      8         for z in range(stack.shape[0]):
----> 9             zslice = stack[z]
     10             for c in range(stack.shape[-1]):
     11                 print('.', end='')

/usr/local/lib/python3.6/dist-packages/zarr/ in __getitem__(self, selection)
    659         fields, selection = pop_fields(selection)
--> 660         return self.get_basic_selection(selection, fields=fields)
    662     def get_basic_selection(self, selection=Ellipsis, out=None, fields=None):

/usr/local/lib/python3.6/dist-packages/zarr/ in get_basic_selection(self, selection, out, fields)
    784         else:
    785             return self._get_basic_selection_nd(selection=selection, out=out,
--> 786                                                 fields=fields)
    788     def _get_basic_selection_zd(self, selection, out=None, fields=None):

/usr/local/lib/python3.6/dist-packages/zarr/ in _get_basic_selection_nd(self, selection, out, fields)
    826         indexer = BasicIndexer(selection, self)
--> 828         return self._get_selection(indexer=indexer, out=out, fields=fields)
    830     def get_orthogonal_selection(self, selection, out=None, fields=None):

/usr/local/lib/python3.6/dist-packages/zarr/ in _get_selection(self, indexer, out, fields)
   1117                 # load chunk selection into output array
   1118                 self._chunk_getitem(chunk_coords, chunk_selection, out, out_selection,
-> 1119                                     drop_axes=indexer.drop_axes, fields=fields)
   1120         else:
   1121             # allow storage to get multiple items at once

/usr/local/lib/python3.6/dist-packages/zarr/ in _chunk_getitem(self, chunk_coords, chunk_selection, out, out_selection, drop_axes, fields)
   1786         try:
   1787             # obtain compressed data for chunk
-> 1788             cdata = self.chunk_store[ckey]
   1790         except KeyError:

/usr/local/lib/python3.6/dist-packages/tifffile-2021.3.31-py3.6.egg/tifffile/ in __getitem__(self, key)
   8146         if key in self._store:
   8147             return self._store[key]
-> 8148         return self._getitem(key)
   8150     def _getitem(self, key):

/usr/local/lib/python3.6/dist-packages/tifffile-2021.3.31-py3.6.egg/tifffile/ in _getitem(self, key)
   8540             decodeargs['jpegheader'] = keyframe.jpegheader
-> 8542         chunk = keyframe.decode(chunk, chunkindex, **decodeargs)[0]
   8543         if self._transform is not None:
   8544             chunk = self._transform(chunk)

/usr/local/lib/python3.6/dist-packages/tifffile-2021.3.31-py3.6.egg/tifffile/ in decode(data, segmentindex, jpegtables, jpegheader, _fullsize, bitspersample, colorspace, outcolorspace)
   6005                     colorspace=colorspace,
   6006                     outcolorspace=outcolorspace,
-> 6007                     shape=shape[1:3],
   6008                 )
   6009                 data = reshape(data, index, shape)

TypeError: jpeg_decode() got an unexpected keyword argument 'header'

You’ll need to upgrade to Python >= 3.7, tifffile-2021.3.31 and imagecodecs-2021.3.31 to use the new NDPI decoder. If you can’t upgrade your system Python, try Anaconda or a Docker image.