What is the best solution for working with large images? I mean not large file sizes, but large pixel counts in a single plane, like whole slide images with 60,000 x 60,000 pixels or more. In our case they are saved as Zeiss CZI files, but cannot be opened in Fiji. There seems to be a limit on the maximum number of pixels of 2^31, which is roughly 46,000 x 46,000. The image size for 3 channels is only 22GB, which should not be a problem on our current hardware. But as I understand it, the software should never need to load the full image into memory anyway? Is there a solution which can load the CZI format and handle these datasets (apart from Zen, which is quite good at it)?
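For reference, the numbers above can be checked with a quick back-of-the-envelope calculation (the 22GB figure assumes 16 bits per sample, which fits the stated size):

```python
import math

# Java (and hence ImageJ/Fiji) indexes arrays with a signed 32-bit int,
# so a single plane is limited to 2^31 - 1 elements.
max_pixels = 2**31  # 2,147,483,648
side = math.isqrt(max_pixels)  # largest square plane that fits
print(side)  # 46340, i.e. roughly 46,000 x 46,000

# Raw size of a 60,000 x 60,000 image, 3 channels, 16 bits per sample:
size_bytes = 60_000 * 60_000 * 3 * 2
print(size_bytes / 1e9)  # 21.6 GB, i.e. roughly 22GB
```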
Yes, view and analyse. Mostly histology images which need white balance adjustment, color deconvolution, area and cell count measurements as well as cell orientation. But also fluorescence images with cell segmentation and overlay of both.
Thanks for the link to the previous thread. I was hoping there was already a ready-made solution, software or plugin, but I will also try this.
BigDataViewer looks great, but how do I convert the file into the XML/HDF5 format? The problem is that the export options in the Zen software are all limited to lower-resolution images, most likely due to the 2GB limit of those formats? Is there a CZI to HDF5 converter? The simple option of loading the file into Fiji and sending it to BigDataViewer obviously does not work…
If you download the latest milestone release of QuPath, then don't install the extension or Bio-Formats separately; they are already included. You will still need the runtime libraries if you don't already have them.
Thanks, got it to work now. I installed the latest milestone and deleted the previously installed extensions from the extensions directory. By runtime you mean the Visual Studio 2015 C++ Redistributable? I could not install this as I already had a newer version installed, most probably because I installed Zen on the computer.
Does QuPath have a white balance option?
Still have to try the big file though… but many thanks so far.
Glad it works, yes the redistributable isn’t always needed.
Not sure what exactly you want with the white balance correction, but you can set brightness/contrast accordingly or set stain vectors (and background) for color deconvolution. There’s lots of documentation, I’d suggest https://github.com/qupath/qupath/wiki/Getting-started
The large 46,000 x 46,000 pixel file opened without problems; zooming and panning are fast and reliable, as is changing the display adjustment. Looking forward to exploring the analysis functions. A great piece of software! Thanks for developing it.
For some reason the automatic white balance in the microscope software (Zen) does a bad job on our system, so the images have a green tint. Instead of adjusting manually, we use a Fiji macro to measure the RGB intensities of unstained background and then adjust the image accordingly. Zen has a similar built-in function, but it uses a rather small area. I thought this would be a standard function for digital histology.
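The correction described above (measure the background, then rescale each channel) can be sketched in a few lines. This is an illustrative Python version, not the actual Fiji macro; `bg` stands for the mean RGB values measured in an unstained background region:

```python
def white_balance(pixels, bg):
    """Scale each channel so the measured background becomes neutral gray.

    pixels: nested list (rows) of (r, g, b) tuples, values 0-255
    bg: mean (r, g, b) measured in an unstained background region
    """
    target = sum(bg) / 3  # gray level the background should map to
    gains = [target / c for c in bg]  # per-channel correction factors
    return [
        [tuple(min(255, round(v * g)) for v, g in zip(px, gains)) for px in row]
        for row in pixels
    ]

# A green-tinted background pixel becomes neutral after correction:
corrected = white_balance([[(200, 220, 180)]], bg=(200, 220, 180))
print(corrected[0][0])  # (200, 200, 200)
```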
The best way to help would be to spread the word and find a second person to help work on this software! As far as I know, it has been a one-man show so far, and off and on at that. It's a bit rough to compare this to either paid software with an entire company supporting it, or even open source software with a team of programmers. Many of the functions are still under construction, including, I believe, the background subtraction that shows up when you adjust your color vectors (recommended for any brightfield image). In general, QuPath is intended for analysis, not image editing, though that might be possible in the future. And for quantitative analysis, starting with the best images is key!
On that note, does your system have a neutral density filter? I ended up removing that from our Observer7.
That said, if you want similar functionality, you can certainly code it into a script. You can't write out an entirely new image (yet), but smaller, processable areas can be sent to ImageJ to be analyzed, and the results sent back to QuPath to be added into the hierarchy of the whole slide image. So if you want to perform background adjustments and cell detection on the adjusted image, and send the cells back to QuPath, you absolutely can.
In general, I don’t know of a standard way of doing very much in digital histology, which makes creating software like QuPath quite different from developing software associated with any one particular scanner. It’s not possible to make many assumptions about the source of the image and (generally) there are no specifications to aid with the interpretation of the image metadata for the formats.
For example, I believe that ImageScope can optionally apply an embedded ICC profile from an .svs file, and the Hamamatsu viewer can apply a gamma transform appropriate for .ndpi files. But QuPath is designed to be more generic, and can be used with files from all kinds of formats for which different kinds of processing might be more appropriate - and which may already have had transforms applied to the pixels (e.g. to sRGB).
As far as possible, I’d like to avoid getting embroiled in many format-specific steps - especially since formats can evolve over time (e.g. images acquired by a different model of scanner).
There’s also then the question of whether such color transforms should be for display only, or if they should impact analysis.
So all in all, it all gets rather complex rather quickly. Some of my recent changes to QuPath are to enable support for dynamically transforming images, e.g. for normalization, but the actual code and algorithms to apply this still need to be written. I hope a lot more will be added to the software over the next few years - directly and via scripts and extensions - and the current functionality is really only the starting point…
We do indeed have a Zeiss Cell Observer, and it has a filter in the illumination path which is necessary when using UV excitation to avoid LED phosphorescence (this is what the Zeiss rep said). After manually moving the filter out, the white balance in Zen works like a charm, so no further adjustments are needed.
@petebankhead QuPath works like a charm on these large files, thanks for creating it!
@bogovicj As I could not get the file to load directly into BigDataViewer, I tried BigStitcher. With other large files I was able to create an HDF5 file, which then loads fine in BigDataViewer. However, in this case I get a java.lang.NegativeArraySizeException.
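That exception is consistent with the 2^31 limit mentioned at the top of the thread: Java computes array lengths as signed 32-bit integers, so an element count above 2^31 - 1 can wrap around to a negative value before the array is allocated. A quick sketch of the wrap-around (simulated in Python, since Python integers don't overflow):

```python
def to_int32(n):
    """Wrap an integer into Java's signed 32-bit range, as overflow would."""
    return ((n + 2**31) % 2**32) - 2**31

# 60,000 x 60,000 pixels x 3 channels overflows a signed 32-bit int:
n = 60_000 * 60_000 * 3
print(n)            # 10800000000
print(to_int32(n))  # -2084901888, hence NegativeArraySizeException
```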
@emartini Yes, I think Imaris, Arivis, Aivia or the software from Visiopharm should work, but one would have to test in each case whether this specific problem of a single large image exceeding the 2GB limit is supported. I would also like something which is more flexible for custom applications. In the past we used Definiens, but could not afford the maintenance payments.
But generally every software/plugin which makes good use of a pyramidal file structure and avoids loading the whole file into one array should work.
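To illustrate why a pyramidal structure keeps memory use bounded: each level halves the previous one, and a viewer only decodes the tiles of the level matching the current zoom, never the whole plane. A minimal sketch of the geometry (level and tile counts only; real pyramidal formats such as HDF5 or pyramidal TIFF store the actual downsampled pixel data per level):

```python
import math

def pyramid_levels(width, height, stop=256):
    """Return the (width, height) of each pyramid level, halving
    until the longest side fits within `stop` pixels."""
    levels = [(width, height)]
    while max(width, height) > stop:
        width, height = (width + 1) // 2, (height + 1) // 2
        levels.append((width, height))
    return levels

levels = pyramid_levels(60_000, 60_000)
print(len(levels))  # 9 levels, from 60,000^2 down to 235^2
print(levels[-1])   # (235, 235)

# Even at full zoom, only the visible tiles are read: with 512-pixel
# tiles, a 2048 x 2048 viewport touches at most 5 x 5 of the
# full-resolution level's tiles.
tile = 512
tiles_full = math.ceil(60_000 / tile) ** 2
print(tiles_full)   # 13924 tiles in the full-resolution level
```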
Regarding Aivia: the biggest single time point 3D data set we have worked with is about 2.1 TB. Data sets with multiple time points can be much bigger. Also, we actively (co)develop and support custom applications - it is a standard part of the packages we offer core facilities (CF go, neuro and pro). In terms of maintenance cost, Aivia includes at least 3 years of upgrades and support (in some cases 5 years). The total cost, even with the 5 years, is still much more attractive than the alternatives (which typically include a single year of upgrades and support).
In terms of open source and free, Vaa3D and SciView are probably the best bets at present.