[NEUBIAS Academy@Home] Webinar "Quantitative Pathology & BioImage Analysis: QuPath" + QUESTIONS & ANSWERS

Hi everyone,

The QuPath webinar for NEUBIAS Academy is now on YouTube.

Approximately 200 questions were asked during the webinar. Over the past week we’ve been arranging these into themes and answering as many as we can… resulting in this post.

Sometimes we’ve grouped questions and/or reworded them for ease of navigation, but we’ve included the original wording for completeness and clarification. I apologise if we’ve missed your question or haven’t done it justice in our (necessarily short) answer; feel free to start a new topic here if you want to discuss anything further.

Apart from that, I’d like to give a huge thanks to @Ofra_Golani, @lmurphy and @melvingelbard for all their work as moderators and in helping put this together, and @Julien_Colombelli, @oburri and everyone else at NEUBIAS who are making the webinars happen!




Installation & updating

Q1: When will the new version be available on QuPath's webpage?
  • When the new version will be avialable in Qupath webpage ?

You can access the latest version here - https://github.com/qupath/qupath/releases/latest

The next stable release is planned in the next few days… so please download the latest milestones and report any bugs quickly so they can be fixed in time!

Q2: How do I install & update QuPath?
  • If I already run QuPath, how do I update it? Do I need to uninstall the old one and then install the new?

There are installation instructions on the download page and at https://qupath.readthedocs.io/en/latest/docs/intro/installation.html

You can have multiple versions of QuPath installed at the same time, but it is not recommended to mix versions for analysis. Some commands may have changed behavior, and there can be incompatibilities in file formats/projects.

In general, if you edit something in a newer version then the file might not open in an older version.

But if you need to import data from an older version to a newer one (e.g. because you want to use annotations you made previously) this is possible. In v0.2.0-m11 there is a new Import images from v0.1.2 command to help with this.

Q3: Are there any differences when QuPath runs under Windows, Mac or Linux?
  • Are there any diferences when QuPath runs under Windows, Mac or Linux?

They should be equivalent. However, QuPath is primarily developed on Windows and Mac. The Linux version has been tested less by the developers.

File formats

Q4: Which image formats are supported by QuPath?
  • All formats can be open by QuPath? (.czi, .tif, …)
  • Does it support all kinds of image files taken on different microscopes? for example .lif file from leica?
  • Does QuPath support Bioformat, similar to ImageJ, so we can import proprietary files , such as .lif (by Leica microscopes)?
  • I have a curiosity (I’m sorry I’m very naive in the field): which image file formats are supported by QuPath? Are there any limitations? Could I, for examples, use files directly exported from an Axioscan acquisition (in the original format)?
  • Is there a list with the supported file formats?

QuPath supports many formats, with the help of Bio-Formats and OpenSlide. You can see a full list of supported formats in the official docs: https://qupath.readthedocs.io/en/latest/docs/intro/formats.html

Q5: If my images are multispectral images, do I still follow the same file import pipeline?
  • If your starting images are a multiplespectral Images, do you still follow the same file import pipeline?

You import images the same way in QuPath. QuPath doesn’t contain any specific spectral unmixing algorithms; depending upon what you need to do, you may have to perform this step elsewhere and import the unmixed images.

Q6: Can QuPath read image metadata from a file?
  • Can metadata about the images be read in from a file?

Yes - on the left of the window there is an Image tab, under which you can see the main metadata that has been read (as long as the file format is supported).

Q7: Does drag & drop influence which image reader is used?
  • Is drag & drop causing any issue with metadata recognition? Are images ““Bio-Formats interpreted”” automatically?

When importing an image to a project, an import dialog will automatically appear, from which you can optionally specify which library is used to open the image via the ‘Image provider’ drop-down list.

If you leave this as the default, then QuPath will choose what it thinks is the ‘best’ image reading library for the particular file format. For TIFF-based formats, this involves checking the metadata so that it will use ImageJ to read images that were previously written by ImageJ, Bio-Formats for OME-TIFF, and Bio-Formats or OpenSlide for most others.

(Basically, QuPath tries to avoid the issues Fiji has in this regard - but specifying the import library from the drop-down menu is the easiest way to control it).

Q8: Can I combine different scenes (e.g. generated by the Axio Scan (Zeiss)) into one?
  • Can you combine different scenes (which e.g. is generated by the Axio Scan (Zeiss) into one?

Not currently, but it’s something I’ve been thinking about and could be discussed for a future version.

It is not entirely straightforward to solve in a general way - see Improve Bio-Formats Image Position Metadata

Q9: Can QuPath open mrxs files with 4 or more channels?
  • QuPath has a problem to open mrxs. files with 4 chanles, do you know how to solve this problem?

See MRXS files in Qupath

Q10: There is a square pattern on top of the image. Is this an artefact of the slide scanner?
  • There seems to be a square pattern on top of the image, visible at high zoom. I have seen this in other images from slide scanners. Is this an artefact of the slide scanner?

I’m not sure what pattern you’re referring to or which image; there’s an ‘overview’ square on the top right, which you can turn on or off from the toolbar. But it’s also quite possible there are artifacts in the slide as well.

Q11: Do image files need to be saved onto your computer in order for QuPath to access them, or can they remain on a server/cloud-based storage?
  • Do image files need to be saved onto your computer in order for QuPath to access them, or can they remain on a server/cloud-based storage?
  • Do you need to download this in the computer that is connected to the microscope? Or can it be done it offline after we collect the images ?

QuPath can be used offline, as long as the images are stored in an accessible location (e.g. if the images are on a remote server, QuPath will not be able to read them without a connection to the server).

QuPath includes some built-in support for images hosted by OMERO (using its web API; only RGB is supported). There is also a custom extension for the Google Cloud API (created separately).


Q12: Can QuPath be used to analyse Z-stacks?
  • Also can QuPath analyse z-stacks
  • Also, can QuPath analyze 3D images as well?
  • Can QuPath work with image stacks?

QuPath can open z-stacks and time series (using Bio-Formats), and a slider will appear in the top left to navigate them. Any regions of interest that are created have their locations stored, so they relate to the correct slice, and you can also apply the pixel classifier.

But the analysis remains primarily 2D; more sophisticated 3D work is likely to require scripting (or other software).

Performance & memory

Q13: Why does QuPath use so much memory when importing TIFFs?
  • Hi I was wondering if you have encountered memory problem while importing tif images into the project? I found it used much memory proportional to the number of images imported and needs to manually clear the memory in QuPath. Is there any solution to it or is it already fixed in the new version? Thank you!
  • Importing multiple tif images into the project at the same time will use a lot of memory. Is there a way not to use memory when importing?

It depends upon the TIFFs… if they are pyramidal, memory use should generally be quite modest.

An issue with importing large, non-pyramidal TIFFs was reported this week Several images are "Image null" when loading in QuPath
It was fixed within 2 hours, and the fix will be in the next release - so if a problem remains, please report it promptly and clearly and we’ll try to sort it out!

Q14: Can I see the live RAM usage of QuPath?
  • Hi, can you see live RAM usage of QuPath?

Yes! View → Show memory monitor

Q15: I sometimes have problems with analysis or export. What are the hardware requirements for QuPath?
  • How can you improve the overall performance of qupath? I have an i7 macbook pro with 16GB ram and sometimes have problems with the analysis
  • Hi, I was wondering what are the preferred recommendations for the hardware on the computer QuPath is working on. I know it says i7 processor and >16 GB RAM, but my computer has that, I made available 12 GB RAM for the programme and I still run into ““java heap space”” errors when using some scripts. Is this still due to the hardware or something else is going wrong? (The script is one to merge results files into 1 results file, in my case annotations results are in the results files). Thanks

Performance problems can often be resolved by approaching things in a different way; for example, the new Export measurements command may be much more efficient than old methods of scripting export. It is a good idea to post details on image.sc for any specific problems.

For a discussion on hardware requirements, see Good pc for QuPath v.0.2.0 M9 and future versions
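As a concrete example, the old ‘merge results files’ style of script can often be replaced entirely by the newer exporter. A minimal sketch for QuPath v0.2, run from the script editor (the output path is an example, and exact class locations may differ between milestones):

```groovy
// Export annotation measurements for every image in the project to one file.
// Scripted equivalent of Measure -> Export measurements (QuPath v0.2).
import qupath.lib.gui.tools.MeasurementExporter
import qupath.lib.objects.PathAnnotationObject

def project = getProject()
new MeasurementExporter()
    .imageList(project.getImageList())        // all images in the project
    .exportType(PathAnnotationObject.class)   // annotation (not detection) measurements
    .separator('\t')                          // tab-separated values
    .exportMeasurements(new File('annotation_results.tsv'))  // example output path
```

This streams measurements image-by-image, which is usually far gentler on memory than building one big results table in a script.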

Q16: How easy is it to run some of the functions in parallel (multi-threaded)?
  • How easy to run some of the functions in parallel, e.g. multi-threaded, so CPU threads are involved in calculations?

Many functions already do run in parallel, but not all. It can depend upon the size of the region being analyzed (e.g. cell detection for a large region will be divided into tiles, and the tiles processed in parallel; but a small region will not be parallelized).

QuPath will take care of this, you don’t need to do anything extra.

Q17: Does QuPath use my graphic card (GPU)?
  • Congrats for QuPath! What about GPU optimization (making use of NVIDIA CUDA, for example), do you have any plans for it?
  • Does it use GPU or CPU only for training and classification ?

If building QuPath with TensorFlow, you can optionally use the GPU version (although it is very fussy about cuDNN…).

For everything else, our current focus is stability and functionality – and finding efficient ways to do things that don’t require any particular hardware. But we realise that will have limits and we are certainly also thinking about GPUs. However, many bottlenecks depend upon things that cannot be solved by the GPU alone (e.g. reading image tiles, the user interface thread).

The interactive machine learning uses OpenCV as the processing library, which uses the CPU (but is highly optimized). It is designed so that other machine learning libraries could potentially be used, if suitable extensions are written.

Help & resources

Q18: Why are some commands marked as 'deprecated'?
  • The two available feature extraction methods have been marked as deprecated in m10. What is the future of feature extraction in Qupath? Can we call feature extraction methods from ImageJ in Qupath groovy ?
  • What does it mean that ‘simple tissue detection’ is flagged as ‘deprecated’ in the newer versions (m10/m11)? Is it still usable and supported?

The ‘deprecated’ flag acts as a warning that the days of that command are numbered… it is likely to be removed in a later version.

The reason for this is usually that it a) isn’t considered terribly useful, b) has been replaced by a better alternative, or c) will soon be replaced by one. Removing old commands helps make QuPath more maintainable, and creates space for new features to be added without the menus becoming excessively clogged up.

If you find a command you particularly need has been marked as deprecated, feel free to ask on the forum why and discuss its future.

Q19: How can I contact you directly?
  • Can you provide us with the contact information if we have query regarding QuPath?
  • Hello moderators. I am a pathologist and am familiar with a couple of proprietary WSI analysis tools. Over the past few years I have been using Definiens Tissue Studio in collaboration with the image analysis team. A few weeks ago I started exploring QuPath. In the context of this talk and my recent efforts, I have a few specific questions for Pete. Would it be possible to get an appointment (~15 to 30 minutes) for a brief discussion? Thank you for organizing this informative webinar.

For discussions about potential research collaborations etc., you can find my contact details online (look for Peter Bankhead at the University of Edinburgh).

For questions about QuPath, please do use https://forum.image.sc/tags/qupath - making sure your post has the ‘qupath’ tag. I’m afraid I really can’t answer software questions individually by email - there are just too many of them.

Q20: Are there online sample datasets I can have access to?
  • Are there example datasets we can have access to?
  • Is there a sample dataset available online?

You can find the sample images used in the documentation (and much of the webinar) at https://qupath.readthedocs.io/en/latest/docs/intro/acknowledgements.html.

There are some other sources of whole slide images online (e.g. TCGA, the CAMELYON and ANHIR grand challenges).

Q21: Is there any updated developer documentation (e.g. JavaDocs)?
  • Is there any updated developer documentation? Or a JavaDoc to see the API?

Not hosted online yet, but see https://qupath.readthedocs.io/en/latest/docs/reference/building.html#building-javadocs.

v0.2.0-m11 is the first version that should have (almost) every public method documented in some form.
(Since this involved writing literally thousands of javadocs over the past few months, the quality may be variable… but the aim is to be stricter in maintaining documentation from now on).

Q22: Is there a cheat sheet for shortcuts? Can I change them?
  • Can we change the shortcut key for a command?
  • I just wonder if the commands are the same for windows 10
  • Is there a cheat sheet of the shortcuts?

You can see the list of shortcuts at https://qupath.readthedocs.io/en/latest/docs/reference/shortcuts.html or via the ‘Command List’ (Ctrl/Cmd + L).

Shortcuts are the same for Mac and Windows, swapping Cmd for Ctrl.

There is no easy way to change existing shortcuts.

Working with images


Q23: Is it possible to create sub-projects within a project?
  • How to make subprojects withthin the project??

No, there is no sub-project concept within QuPath. But you can set metadata tags to arrange entries.
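Tagging can also be scripted. A hedged sketch (run for the current project; ‘cohort’ and the naming rule are made-up examples):

```groovy
// Add a metadata key to each project entry; values then appear in the
// Project tab and can be used to sort/filter images.
def project = getProject()
for (entry in project.getImageList()) {
    def value = entry.getImageName().startsWith('Control') ? 'control' : 'treated'
    entry.putMetadataValue('cohort', value)   // 'cohort' is an arbitrary example key
}
project.syncChanges()   // write the updated project file
```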

Q24: Is there a way to make projects self-contained, using the relative paths to images?
  • Is there a way to make the images relative to the QuPath project? So that QuPath projects can be self-contained?

v0.2.0 projects use a kind of hybrid approach already: storing both the absolute and relative paths to the image files.

When you open an image, it checks both. If you move a project, but maintain the relative locations, QuPath will still prompt you to update the paths - but it will prepopulate all the paths for you, so you just need to accept the changes by clicking one button.

This is because QuPath politely doesn’t want to change the paths stored in the .qpproj file without permission.

Q25: Can I create a project from a folder that already contains images?
  • Can I create a project from a folder that already contains images? Like opening a Folder in ImageJ?

The project folder needs to be empty, but you can then add all the images from your other folder in one import.

Q26: How many slides can be added to a project ?
  • How many slides can be upload for a project ?

There is no built-in limit, but if you have thousands of images then you may wish to split them into separate projects.

Q27: Are the key:values from OMERO automatically imported into QuPath?
  • Do the key:values from OMERO automatically import into QuPath?
  • Have you thought about how to integrate thie ‘project’ concept with the open microscopy initiative, i.e. OMERO?

No, the OMERO integration is at a very early stage. More to come in the next year!

Q28: Why does my project file look like a PDF, but still open?
  • im using 0.2.0 version and it is from the zip file i dowloaded. but now within the project folder my project file is shown as a pdf file. interstingly i can drag and open it on QuPath but still puzzeled.

This is mysterious. My guess is that the weirdness occurred in Windows, outside of QuPath, but if it persists or causes problems you can start a discussion on image.sc.


Q29: Can I change the colors of my objects to make them easier to see?
  • Can you change the colour the annotations and detections are marked when selected or not? For example with tissue detection, if the detection is selected it is bright yellow, zooming out when removing not so usefull parts of the detection, its quite hard to see this yellow on a grey background.

Yes, search for ‘color’ under Edit -> Preferences to see all the colors you can change.
Rather than highlighting selected objects with a color, you can also highlight them using a thicker line/bounding box if you prefer.

Q30: Is there a way to have the full range shown in the Brightness & Contrast histograms?
  • Is there a way to have the full range shown in the B&C histogram and not adjusted to min and max values for each image?

See https://github.com/qupath/qupath/issues/352#issuecomment-520907454

Q31: Is there a method to add custom look-up tables for measurement maps?
  • Is there a method to add custom lookup tables for measurement maps?

Not officially, but there is a trick…

Creating annotations

Q32: Can annotations overlap?
  • Can you have one annotation inside another one ?
  • Can you have overlapping annotations?

Yes - annotations can overlap, and one annotation can be drawn inside another. An annotation that is entirely contained within another is treated as its child in the object hierarchy. Note that overlaps are not automatically resolved, so the same region may contribute to the measurements of more than one annotation.

Q33: Can the behaviour of the brush/wand tool be adjusted?
  • How to make brush more or less sensitive?

By default, the width of the Brush tool depends upon the magnification at which the image is being viewed: it paints a smaller region (finer control) when you are zoomed in, and a larger region when you are zoomed out. This can be changed in the Preferences by unticking ‘Scale brush by magnification’ and setting a brush diameter in pixels.

The Wand tool sensitivity can also be adjusted in the preferences.

Q34: When I draw annotations they have dotted lines and disappear once I stop drawing. Why is this happening?
  • I can’t keep the annotations, when I draw it appears with a dotter line and then disappears. How can I save it?

It sounds like you have selection mode enabled. Selection mode changes the drawing tools so that they can instead be used to select objects (e.g. to manually classify them). Turn this off by clicking the button labelled with an ‘S’ in the toolbar.

Q35: Can I adjust annotations?
  • If you run simple tissue detection, can you then edit the selected tissue with the brush or wand tool (holding down the Alt key) to exclude parts of the detected tissue from the analysis?
  • Is it possible to do corrections on annotated images ?

Yes, if the annotation is selected (and unlocked) you should be able to expand or reduce using the annotation tool of your choice. To check if your annotation is locked or unlocked, right click the annotation you want to change in the ‘Annotation’ tab on the left.

Q36: How can I create the same annotation shapes on different images?
  • Is it possible to draw a same size shape in different images in the same project?

If your annotations are rectangles or ellipses, you can use the Objects -> Annotations -> Specify annotation command.
For other shapes, see the next answer…
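The same idea can also be scripted, which is handy for batch use. A minimal sketch using the v0.2 API (the coordinates are examples):

```groovy
// Create a fixed-size rectangular annotation at the same pixel coordinates;
// run with Run -> Run for project to repeat it on every image.
import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs

def roi = ROIs.createRectangleROI(1000, 1000, 500, 500, ImagePlane.getDefaultPlane())
addObject(PathObjects.createAnnotationObject(roi))
```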

Q37: Can I transfer annotations made on one image to another image?
  • Can Pete demonstrate how to perform image alignment = copy/paste multiple annotations to another image?
  • Wondering if it is possible to move an annotation from H&E slide to overlay onto corresponding IHC slide in order to use the classifier from H&E slide (along the lines of virtual double staining)?

To some extent. If both images are open at the same time you can select one window (with the annotation) then the other (where it should go) and choose Objects -> Annotations -> Transfer last annotation (Shift + E).

More complex transfers of multiple objects can be done by scripting, but in all cases QuPath won’t (by itself) perform any automatic alignment at this time.
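One scripted approach in v0.2 is to serialize annotations to JSON with GsonTools and read them back in the destination image - a rough sketch (the file path is an example):

```groovy
// Image 1: write all annotations to a JSON file.
import qupath.lib.io.GsonTools

def gson = GsonTools.getInstance(true)   // 'true' -> pretty-printed output
new File('annotations.json').text = gson.toJson(getAnnotationObjects())

// Image 2: read the file back and add the objects to the hierarchy, e.g.
// def type = new com.google.gson.reflect.TypeToken<List<qupath.lib.objects.PathObject>>() {}.getType()
// addObjects(gson.fromJson(new File('annotations.json').text, type))
```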

Q38: What is the rule for resolving hierarchy for a group of Points annotations? (sometimes the points are spread on different ROIs)
  • What is the rule for resolving hierarchy for a group of Points Annotation? (sometimes the points are spread on different ROIs)

If the points are spread across different annotation ROIs, then they aren’t considered to be inside any of them from the point of view of the hierarchy. In this case, it might make sense to split up the points into objects that are completely contained inside other annotations.

This could be scripted, or perhaps added as a command in a future version. Please start a discussion on forum.image.sc if this would be useful; the behavior of points in the hierarchy hasn’t received much attention.

Q39: Can a cell boundary be limited by the annotation it is in?
  • Can a cell boundary be limited by the annotation it is in? Especially important for preventing cellular overlap at the border of two annotation classes

Not with the current built-in cell detection algorithm, but ways to improve/constrain cell boundary estimation are planned for a future version.

See also StarDist section for progress.

Q40: Can I fill holes in annotations?
  • Can you select a tumour area but fill in the holes (smaller than a certain size) within a mass to create a ROI?

Yes, there are two commands: Fill holes (for all holes) and Remove fragments and holes (to remove holes below a certain size).
(The second command has been renamed in v0.2.0-m11 from earlier versions)

Q41: Does QuPath support collaborative annotation?
  • We are using Dropbox to create on a shared QuPath project (qpdata etc). Is there a plan to have shared Qupath projects where multiple users can annotate and collaborate together ?

Not by itself. QuPath is primarily a desktop application with a focus on visualization and analysis. I think storage and collaboration are best left to other platforms, however QuPath can and should be able to integrate with such platforms.

For example, QuPath currently supports reading images from OMERO and there is a (separate) extension using the Google Cloud API. I think using QuPath for annotations in this way will require combining with something like this.

Import & export

Q42: Can I export (possibly transformed) ROIs from QuPath for use elsewhere?
  • I would like to have annotations read in other software that controls a system with the original slide. The issue is knowing the coordinates from the annotations to the position in the other system. Can QuPath create coordinates relative to reference points, EG the corners of a slide, and export these annotations/ROI’s?

See https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html
An Affine transform can also be applied to ROIs if needed before export; please post a question on image.sc if this is needed.
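For example, a sketch of transforming annotation ROIs before export (assuming v0.2’s RoiTools; the transform values are placeholders for whatever maps QuPath pixel coordinates to the target system’s reference point):

```groovy
// Shift annotation ROIs into another coordinate system before export.
import java.awt.geom.AffineTransform
import qupath.lib.objects.PathObjects
import qupath.lib.roi.RoiTools

def transform = new AffineTransform()
transform.translate(-500, -250)   // placeholder offset to the reference point

def transformed = getAnnotationObjects().collect {
    def roi = RoiTools.transformROI(it.getROI(), transform)
    PathObjects.createAnnotationObject(roi, it.getPathClass())
}
```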

Q43: How can I export annotations as ground truth images from QuPath?
  • About annotations and Stardist : can you show us how to export annotations as ground truth image from QuPath ?
  • After cell segmentation. Can we save the information to be used in another software?
  • Can you export the annotation in form of a 3D stack which then could be used in ImageJ to produce a 3D image?
  • I would like to know if there is a tutorial that can help me to generaty the binary images need it as GT for stardist ?
  • How to generate images that act like a GT for Stardist (binary images in tiff format)?
  • How are the annotations (ROIs) saved? Specifically in which format? Geojson for example? If not, do you have tools to convert to other formats?

You can do this by using the ‘TileExporter’ - see https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html.
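A condensed sketch, adapted from the documentation’s labeled-image example (QuPath v0.2; the class name ‘Tumor’ and the output path are examples):

```groovy
// Export image tiles plus matching labeled masks (e.g. ground truth for StarDist).
import qupath.lib.common.ColorTools
import qupath.lib.images.servers.LabeledImageServer
import qupath.lib.images.writers.TileExporter

def imageData = getCurrentImageData()

def labelServer = new LabeledImageServer.Builder(imageData)
    .backgroundLabel(0, ColorTools.WHITE)   // unannotated pixels -> 0
    .addLabel('Tumor', 1)                   // 'Tumor' annotations -> 1
    .multichannelOutput(false)
    .build()

new TileExporter(imageData)
    .downsample(1)               // full resolution
    .imageExtension('.tif')
    .tileSize(512)
    .labeledServer(labelServer)
    .writeTiles('/path/to/export')   // example output folder
```

The binary/labeled masks this writes are the format typically used as ground truth for training StarDist and similar models.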

Q44: Can I import third-party annotations (e.g. Aperio XML)?
  • How to import annotation in batch ? I have a colleague annotated images on imagescope, Leica. Now I want to analyze them on QuPath. How to import those hundreds of annotations on Qupath? Thanks!
  • Can you import third party annotations? (i.e. aperio xml)

In principle you can import annotations, but as far as I am aware there is no open specification for Aperio XML; for this reason, QuPath does not support it.

But see this user script: https://gist.github.com/DanaCase/9cfc23912fee48e437af03f97763d78e

Q45: Can I use QuPath to export images for publication?
  • How can a small area of the whole slide image be exported as a TIFF, JPEG etc for use in a publication? Can the region selected for capture be in any orientation or is area selection restricted by the orientation of the pre-set selection tools? Can the resolution of the exported image be set at the time of selection / export?

All details about writing image regions/tiles can be found in the official docs: https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html

It should allow some flexibility regarding the format/parameters of the export. Feel free to experiment yourself and ask on image.sc if you have a doubt!


Processing & analysis

Q46: How do we know we can trust the analysis results?
  • I am sorry but how we are sure that this approximation is real for example Soma 75 % and tumot 25%… can we compare it whis a manual annotation, can you change something like in mahcine learning

Excellent question - stay skeptical!

Image analysis lets you generate numbers, but these can be sensitive to how precisely you do the analysis (with or without machine learning).
QuPath gives you the tools to make the measurements and to visualize them, but the meaning and validation is up to the user.

This is why image analysis results (from any software) should be carefully validated and treated with caution. Doing this is challenging because the ‘truth’ can be hard to define. One way is to compare with expert evaluation. Another (in some cases) is to use clinical outcome. Both these have limitations; validation is almost always hard – but crucial.

Q47: Does QuPath edit the original files?
  • Does QuPath edit the original files or does it save it differently?

QuPath doesn’t edit the original image files. It also doesn’t save the image data in its own files; rather, QuPath stores its data along with a URI that links back to the original image.

Q48: How can I do background subtraction?
  • thanks, I would like to measure the intensity within annotations after background subtraction, is there away to do it?
  • How do you do background subtraction?

QuPath does not support changing pixel values. Some commands (e.g. cell detection) incorporate local background subtraction as part of the processing - but global background subtraction is not possible, since the pixels cannot be changed (and the pixels cannot be changed because the images can be so large).

If you require global background subtraction, you should either apply this outside of QuPath or subtract the background from relevant measurements after they have been made.

However, recent improvements in v0.2.0 mean that in the future QuPath may support more dynamic pixel transformations - such as background subtraction and normalization - provided these are applied before any other analysis steps. This is not yet a part of the main software.

Q49: Can QuPath detect the tissue itself?
  • Can Qupath detect the tissue itself?

Yes! You can use Analyze → Preprocessing → Simple tissue detection to detect the whole tissue from the background, or use pixel classification.

(This isn’t yet described in the QuPath documentation, but it’s on our todo list.)

Image type, stains & channels

Q50: How many channels can QuPath handle in fluorescence/multichannel images?
  • We can’t use Qupath for immunofluoence images ?
  • Can one anlyse up to 9 markers in one slide? Optain from Vectra Polaris?
  • Is there a limit to the number of channels in the multiplexed images?
  • For IF, how many channels can it handle? Tx

A lot! I have used it to work with up to 44 channels so far. More may be possible, but performance will depend upon the size of the image, memory available and number of channels.

Q51: Can QuPath work with reflectance images?
  • Can QuPath work with reflactance images - i image in two channels, fluoresecence and reflectance

We don’t know, we don’t have this kind of data and haven’t tried it - but we’d be interested to find out!

Q52: Can QuPath handle images that contain multiple types (e.g. fluorescence channels and something else)?
  • Can QuPath work with reflectance images?
  • How do you deal with setting the image type if you have multi-channel data with brightfield and fluorescence channels?

QuPath hasn’t been designed to handle this kind of data. It might work if the channels are all the same type (e.g. 8-bit, 16-bit) but it can’t handle a mixture (e.g. 16-bit fluorescence + RGB).

We are starting to recognize a need for this, however, although we don’t know quite how big a need. If there are important applications that would require this, we could discuss it on the forum.

Q53: Can I change the values for the color deconvolution matrix?
  • Can you change the values for the color deconvolution matrix?
  • Can you set DAB in Stain 3 or is there a specific setting for DAB images?

Yes, you can double-click the values next to each stain in the Image tab to change them.
If you first draw a small rectangle on the image in an area containing the stain of interest, the pixels within that rectangle will be used - otherwise you can type the values in directly.

Q54: Are you thinking about making a new stain estimate with more than 3 colors?
  • Do you think to make a new stain estimate with more than 3 colors ?

Not currently; the color deconvolution method used (by Ruifrok and Johnston) is limited to three stains. For more stains, there is the pixel classifier.

Q55: Can I set up a new custom image type (with specific stains) and set it to all the images in my project?
  • Can you set custom image types that have their own specific stains that one can use and save for future images?
  • Can you set up a new image type and save it? Brightfield Massons Trichrome for example?
  • Can one create new image types (say for a PAS stain?)
  • May be interesting to know if it is possible to identify the stain vectors within QPathhow to
  • I was wondering if how you can distinguish between DAB and a residual orange in the slide - is there an easy way to deconvolute the colors maybe?
  • Is there a pre-set method/workflow to quantify immune cells stained with e.g. alkaline phosphatase ?

The image types are fixed, but if yours is not an exact match then just choose the one that is closest.

You can set the stains to be anything (up to a maximum of 3 for color deconvolution), and the necessary command will be logged to the ‘Command History’ so you can apply these in a short script to many images across a project (see https://qupath.readthedocs.io/en/latest/docs/scripting/workflows_to_scripts.html).
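For reference, the logged commands look something like this (the stain vector values below are purely illustrative - paste the lines from your own Command History rather than copying these):

```groovy
// Apply an image type and custom stain vectors across a project
// (Run -> Run for project). Values here are example placeholders.
setImageType('BRIGHTFIELD_H_DAB')
setColorDeconvolutionStains('{"Name" : "H-DAB estimated", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.651 0.701 0.290", ' +
    '"Stain 2" : "DAB", "Values 2" : "0.269 0.568 0.778", ' +
    '"Background" : "255 255 255"}')
```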

Q56: What does 'Other' image type mean?
  • What is other image type means?

Anything that isn’t covered by the more specific types. In practice, it isn’t really needed. For biological applications, you can select ‘Brightfield’ for anything where the interesting features are darker than the background, and ‘Fluorescence’ for anything where they are brighter (even if it isn’t really fluorescence).

Q57: Sometimes when I open an H&E image, QuPath does not offer an H&E stain option, why?
  • For some H&E images when opened, Qupath does not offer an H&E stain option

The H&E option should be available for any 8-bit RGB image. Some brightfield images are not stored as ‘standard’ RGB; for example, I have seen some (Zeiss microscope?) images that are 12-bit.

Currently, you would need to convert these images to 8-bit RGB outside of QuPath for them to work. It could be solved directly in QuPath in the future - at this point we haven’t encountered the problem often enough to justify the time it would take to address.

Q58: What can I do if QuPath mistakes dark blue for dark brown with H-DAB images?
  • I have a question about counting cells in H-DAB images: what can one do if QuPath counts really intensely stained negative cells as positive, so it thinks a dark blue would be brown (DAB+)?

You could try setting the stain vectors manually to see if that helps. Otherwise this might need to be addressed at the sample preparation stage.

If you can share sample images demonstrating the issue then it could be discussed on image.sc.

Q59: What do the numbers in the Hematoxylin/eosin/residual lines mean?
  • What does the numbers in the Hematoxylin and eosin lines in the image description mean?

The numbers are the ‘stain vectors’ used to characterise the color of each stain.

This is not QuPath-specific. Gabriel Landini has a great description of what they mean at https://blog.bham.ac.uk/intellimic/g-landini-software/colour-deconvolution/

Q60: Can I set up QuPath so it automatically sets the type of all the imported images to what I want?
  • Can you set a default type if you always use H&E for example rather than an auto estimate?
  • Can you set it up in a way that all imported imaged are typed according to one that you select. For example, If you only use H&E stainings, set it up that all images will be assigned that type

You can, during import of the slides.

If you forget, you can set the type in one image and generate a script from the ‘Command History’. The line of the script that sets the image type should be clear; then you can run this script across your entire project to apply it to all images (Run -> Run for project).
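For example, the logged line for an H&E image typically looks like the one-liner below (copied from the Command History; check your own log for the exact type name in your version):

```groovy
// Sets the image type; run with Run -> Run for project to apply across images
setImageType('BRIGHTFIELD_H_E')
```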

Cell detection

Q61: My cells aren't round, can I use cell detection?
  • I have noticed the positive cell detection tool doesn’t deal well with irregular shape cells such as microglia and astrocytes. Is there a way to modify the setting to allow for better detection of these margins?
  • For immunofluorescence images, cells are of different shapes and sizes. sometimes nucleus detection and expanding it further will not work. Is there any possibility of adding ‘ACTIVE CONTOURS’ option to QuPath (similar to Icy software) ?
  • Would the cell detection method work for cell with various morphologies such as neurons ?

You can try it, but beware the built-in cell detection is very general and may not work well for your images - it is best with ‘roundish’ nuclei. Check out the QuPath YouTube channel for an in-depth description of all the cell detection parameters that can be adjusted.

You can also develop your own custom methods of cell detection, e.g. by writing scripts or using ImageJ macros.

However, adding more specialised and powerful methods of cell detection is a high priority; StarDist is one example of this.

Q62: Can QuPath detect things other than cells?
  • Can change or specify child objects as something other then ““cells”” ? Often you might want ot detect particles that are not cells, so the denomination is awkward in that case.
  • Is non-nuclei based cell detection supported?
  • is there a nuclear detectior option ?
  • Which parameters are used for single cell segmentation - not nuclei,?

Yes! ‘Cells’ are an example of a general object referred to as ‘Detections’. You can create detections by other means (e.g. pixel classification).

Cells and other detections act just the same way for most things, e.g. you can train an object classifier or make measurements for detections of any kind, not only cells.
See https://qupath.readthedocs.io/en/latest/docs/concepts/objects.html for details.

If you want to detect nuclei only, you can use the cell detection command but set the ‘Cell expansion’ parameter to zero.
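The logged cell detection call can then be edited so the expansion is zero. This is a trimmed sketch; copy the full parameter JSON from your own Command History, since the other values here are illustrative:

```groovy
// 'cellExpansionMicrons': 0.0 means only nuclei are detected,
// with no estimated cytoplasm
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
    '{"requestedPixelSizeMicrons": 0.5, "threshold": 0.1, ' +
    '"cellExpansionMicrons": 0.0, "includeNuclei": true}')
```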

Q63: Is it ok to change the cell detection parameters?
  • Is ok to change the cell detecion parameters?

Yes, and you should! Each project is different and your parameters should reflect this.

In general, I would recommend trying to keep detection parameters the same for a single project or study – but whether this will work well depends a lot on staining consistency.

If you find that you can only achieve good detection results by adjusting parameters on a per-image basis, then I would say the most important thing is to be clear about what you have changed and why… and the reader/reviewer can decide if it is justified. Others may disagree; see also [NEUBIAS Academy@Home] Webinar "In defence of the scientific integrity of image data and analysis" Q&As :slight_smile:

Q64: How can the boundary of the cell be identified accurately?
  • Is there a cell detection method available/planned that takes into account signal intensity in addition to distances only?
  • Hi, in the cell measurement info it includes both the nuclei and the cytoplasm?
  • Can you do positive cytoplasmic stain detection as well as nuclear stain? Since detecting cell boundaries may be harder for the software.
  • What cell expansion means?

QuPath’s built-in cell detection expands the nucleus by a fixed distance to approximate the cell boundary. Measurements are made both within the nucleus and the (estimated) cytoplasmic area.

The ‘Cell expansion’ parameter controls this distance, and can be used to help restrict cytoplasmic measurements to be close to the nucleus.

There is not currently a built-in way to use the intensity information in the image to improve this boundary estimation, but it is certainly something we are thinking about. I speak about it in this presentation.

Q65: Can I run cell detection on the whole image or do I need to create an annotation first?
  • Can I run the cell analyzer tool on the WSI or do I need to choose a ROI annotation before?

You need to create an annotation, but this can be generated automatically. For example, using the Create full image annotation command.

Q66: Can I merge detections?
  • Is it possible to merge the adjacent positive cells into a single annotation?

Annotations can be merged (use the ‘Merge selected’ command). Cells can’t be easily merged. Cells are distinct from annotations; see https://qupath.readthedocs.io/en/latest/docs/concepts/objects.html
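If you prefer to script this, the same operation is available as a one-line command (name as in the v0.2 milestones; check your version's autocompletion):

```groovy
// Select the annotations to combine first (e.g. in the Annotations tab), then:
mergeSelectedAnnotations()
```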

Q67: Can you correct false negatives (missing cells) after cell detection?
  • what about false negatives? Can you correct for this if you see some cells are missing?

Not really - you would need to run the cell detection again with different parameters. However, if you over-detect then you can remove the false positives later.

For that reason it is usually better to err on the side of detecting too much rather than too little, assuming you’ll deal with the errors later.

Object & pixel classification

Q68: Is the pixel classifier also available in the previous version of QuPath?
  • Is the pixel classifier also available in the previous version of QuPath?

No, it is a new feature in v0.2.0.

Q69: Can I click on false positives to retrain the algorithms on the fly?
  • Can I click on false positives to retrain the algorithms on the fly?

Yes. Draw an annotation around the false positives, and assign a classification to the annotation.

Q70: Can I use the pixel classifier to identify stain colour deconvolution parameters?
  • Can one use the pixel classifier to identify stain colour deconvolution parameters?

Not the pixel classifier, but you can draw a small rectangle on any stained area and double-click on the stain vector under the ‘Image’ tab to set the stain based on the pixels in the rectangle.

There is also an ‘Estimate stain vectors’ command that can be used to estimate two stains based upon a rectangle that contains ‘a bit of everything’ (first stain, second stain and background).

Q71: Can this pixel classifier be saved and reproduced in other platforms?
  • Can this pixel classifier be saved and reproduced in other platforms?

It can be saved and re-run within QuPath. The feature calculations and other aspects of the classification have all been implemented within QuPath and are not available elsewhere.

Q72: Can we use the pixel classification on H&E?
  • Can we do this classification on H&E?

Yes, you can use the pixel classifier on any image, with any staining.

Q73: Can I save the classifier and then improve that same classifier by training on other images?
  • Can you edit classifiers as you get new images?
  • Can you save the classifier and then improve that same classifier by training on other images?

Not currently, you would need to retrain the classifier from scratch. But if you keep your old annotations these can be used to contribute to the new classifier as well.

The ‘new’ classifiers in v0.2.0 don’t make this as easy as they should, so I’ve created an issue to track progress at https://github.com/qupath/qupath/issues/462

Q74: Can I generate annotations/detections based on the pixel classifier results?
  • Can you generate annotations/detections based on the pixel classifier results?

Yes, there’s a ‘Create objects’ button for this.

Q75: Can I train my pixel classifier on multiple images/regions if one is not enough?
  • For training the pixel classifier, if one region is not enough to train an image, can one train multiple regions as was the case with axon.

Yes, you can train using multiple regions from the same image.

You can also use the ‘Create combined training image’ command to create a single image that is composed of pieces of multiple images, and annotate that for training. This helps give an overview of classifier performance across more varied data.

Q76: How many classifiers are available? Do they need to be associated to cell types or do they work for anatomical regions in general?
  • How many classifiers are available? Do they need to be associated to cell types or do they work for anatomical regions in general?

QuPath doesn’t have built-in, pre-trained classifiers because these would likely be too data-dependent - but you can train as many of your own as you like, saving them for later reuse.

They can be used for anything that QuPath can represent as an object, not just cells. But figuring out precisely how to detect other structures and train a classifier for them could be the subject of a research project in itself. For many applications, QuPath doesn’t give a ready-made solution - but rather the tools that can be used to create and share the solution.

Q77: You mentioned in your demonstration that only two annotations are enough to train a classifier, is it true?
  • I am not sure if I missed a point, but in the last example you were saying that only two annotations are enough to train the classifier?

Yes. Whether it is a good classifier is another matter - often more annotations are required to train a classifier that is sufficiently accurate, but this depends upon the application.

Q78: Is the classifier relying also on other examples, or only on my annotations?
  • Is the classifier relying also on other examples?

No, it only relies on the annotations that you give it.

Q79: Can I apply the same classifier to other images or other projects?
  • Can we apply the same classifier to the other images or other projects?

Yes, you can save any classifier and load it again in another image/project.

Q80: Do you recommend using small or large training samples to train a classifier? And is it best to limit the number of samples?
  • Do you recommend using small or large training examples to train the classifier? And is it best to limit the number of examples?

I’d recommend diverse training samples, based on small annotations. Large annotations will probably give a lot of almost-identical samples.

In the ‘Advanced options’ you can specify the maximum number of training examples (which QuPath will randomly subsample from the annotations). However, it is usually better to avoid reaching this limit and to annotate more selectively instead.

Q81: Is it possible to do texture classification?
  • Is it possible to do texture classification?

Yes, you can create superpixels and add texture measurements to these (e.g. Haralick textures), before training a classifier in the same way as for cell classification. Ask on image.sc if more information is needed.

Image registration & alignment

Q82: Can QuPath do image registration?
  • Can QuPath do co-registration e.g. alignment of multiple images from adjacent slides?
  • Can QuPath perform alignment between images? For example the same piece of tissue stained with different markers.
  • Is there a way to merge images (eg from different channels) and align them in case they are slightly misaligned?
  • Is there any registration available in Qupath ?
  • Will image registration be integrated in QuPath in the future? Not just manually rotating the annotation but also with automated affine transformations? I have serial sections where transferring the annotation does not always work.

No, QuPath requires that you find another image registration solution, and then you can import your pre-registered image for further analysis.

There are no plans to change this; we simply don’t have the time or the expertise. Image registration is a difficult problem that others specialize in, and which can easily be separated from the other analysis steps.

(But we would be very interested to know if a good, open source, whole slide image-friendly registration software exists)

Q83: Can I align a reference map to an image?
  • Can one use a reference map (say the Allen brain atlas) to apply to an open image of a brain slice?

Not currently. It’s not something we’re working on as we don’t have any projects that require it. Although if we end up part of a collaboration that needs it then it’s something we’d be interested in working on…


Measurements

Q84: How can I quantify stain intensity?
  • Can you quantify stain intensity (e.g. mean intensity per cell)? Which parameter would you use for a good representation of amount of staining?

QuPath’s cell detection automatically makes a range of measurements - but which are most suitable for your application will depend upon what you are wanting to measure. You could ask on image.sc.

Q85: Can I measure the circularity of annotations?
  • Can you measure the circularity of annotations?

You can get the area and perimeter of annotations, from which you can calculate the circularity (e.g. 4π × area / perimeter²).

This could be scripted if necessary; however note that this becomes more complex with certain annotation shapes (e.g. containing multiple pieces, self-intersections, holes).
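A minimal sketch of such a script, assuming single-piece annotations without holes (ROI `getArea()` and `getLength()` give area and perimeter in pixels):

```groovy
// Circularity = 4*pi*area / perimeter^2; equals 1.0 for a perfect circle
for (annotation in getAnnotationObjects()) {
    def roi = annotation.getROI()
    double perimeter = roi.getLength()
    double circularity = 4 * Math.PI * roi.getArea() / (perimeter * perimeter)
    annotation.getMeasurementList().putMeasurement('Circularity', circularity)
}
fireHierarchyUpdate()  // refresh the measurement tables
```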

Q86: Can the Measurement Exporter be shown?
  • I am super curious for the Measurement Exporter that Laura mentioned in my previous question, any chance this can be shown? :slight_smile:

Sorry, we ran out of time! But find it under Measure -> Export measurements and ask on image.sc if its use is not clear!


Scripting & batch processing

Q87: What language is used for scripting in QuPath?
  • Are there examples of JavaScript codes using QuPath API ?
  • Can you write a script in other language than macro-language?
  • What is the current language script usable with Qupath and will there be additional ones?

Groovy. Secretly, there has been some (very limited) support for other languages (JavaScript, Jython), but this will probably be dropped in favour of supporting one language as well as possible.

Q88: How do I batch process in QuPath?
  • Can I train a classifier based on multiple image (e.g. 10) and then apply the it on many image (e.g. 1000)?
  • I might have missed this, but can you batch process a set of files?
  • How can you run analysis, for example positive cell detection to all your annotations (selected areas) in different images in your project (not doing it image by image)?
  • Once we trained a classifier for a given image, can we apply the same to another image?
  • How to add script for classfication?
  • Is it possible to run the groovy scripts outside of QuPath (while QuPath is running of course) ?

In the script editor use Run -> Run for project, then select the images you want to process.
See https://qupath.readthedocs.io/en/latest/docs/scripting/workflows_to_scripts.html

Q89: Can I batch export images (as an overlay with detections or annotations) and results?
  • Looks like its easy to batch process the images in the project using the scripts from the workflow, but is it possible to batch export images (as an overlay with detections or annotations) and results (for example as csv)?

Yes to measurements: there’s now an Export measurements command for precisely this.

It’s possible for images too, based on the information at https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html
The details matter; so if you don’t find a solution there please post a question on image.sc with an exact description of what should be exported.

Q90: Are there QuPath scripts in an online respository somewhere?
  • Is there a collection of QuPath scripts that different people have written available anywhere?

Not consistently arranged, mostly because QuPath’s API is currently being updated and this can break old scripts. When there’s a v1.0.0 then there might also be a scripting repository.

For now, you can search for QuPath on GitHub or post on the forum.

Q91: Is the workflow history recorded anywhere? Is this automatic?
  • Will the sript editor automatically record the history of steps? Or will you need to activate this somewhere?

The ‘Command History’ updates automatically, but note that it doesn’t record absolutely everything (for example, it does not record regions you have drawn or scripts you have run).

It gives a starting point, mostly intended to aid batch processing in cases where the analysis is ‘straightforward’ (i.e. running a few commands in sequence).
See https://qupath.readthedocs.io/en/latest/docs/scripting/workflows_to_scripts.html

Q92: Can I use QuPath with Python?
  • Can you briefly mention the options to call python from the groovy scripting. In particular, can the pixel values (.e.g., of a cell) be exposed to python for feature extraction etc?
  • Hi, I’m particularly interested in the issue of integrating python DL models into QuPath (lets say, a simple structure segmentation network). Which method is best suited? Jython?

We experimented with different ways to connect QuPath with Python, but in the end we prefer to simply use export/import. There are ways to directly link the two worlds (e.g. javabridge, Py4J), but we found them a little awkward and not useful enough (so far). For more about exporting, see https://qupath.readthedocs.io/en/latest/docs/advanced/index.html

Jython sounds promising, but isn’t suitable (it doesn’t support NumPy, for example).

For deep learning specifically, Java bindings exist for many of the main libraries. StarDist via TensorFlow was shown in the demo; more will be added over time.

Q93: When using 'Run for project' does QuPath do any batch correction?
  • What’s your thought on combining images from multiple run batches, e.g. 30 images from run1 and 50 from run2, for the same project? Will QuPath implement some batch correction / adjustment for this purpose?

QuPath doesn’t currently include any batch correction; if this ever changes, it should be clearly documented.

With ‘Run for project’, each image is treated independently, so it doesn’t matter how you batch them (though it might matter if you run the script multiple times on the same image).


Beyond what’s built-in


Q94: Does QuPath work with Fiji, or other ImageJ plugins?
  • Can you specify to use your own instance of ImageJ which has several plugins installed ?
  • Could I used fiji/ImageJ plugins with the groovy script (not with the Imagej macro) ?
  • Hi! Could you use fully functioning Fiji Plugins in this innate ImageJ (like MorphoLibj)?
  • If I link between ImagJ to QuPath, can I use the StarDist to segment the nuclei and return the annotations back to QuPath?
  • ImageJ installation can be upgraded to every version and include every plugin?
  • Is the imageJ instance within QuPath continuously updated? And can one add plugins to its plugin folder? Thank you
  • Is this a connection to ImageJ1.x? What about Fiji?
  • QuPath works with ImageJ. But Fiji is Just ImageJ. Can one use Fiji in QuPath?

Currently, QuPath only integrates directly with the ‘original’ ImageJ - and not with ImageJ2.

Because QuPath keeps its own version of ImageJ internally, which does not match any version you might have installed elsewhere on your computer, it won’t necessarily have immediate access to the plugins you want. However, you can set QuPath’s ImageJ to use whichever plugins directory you like (including the plugins directory of an existing ImageJ installation) via Extensions -> ImageJ -> Set ImageJ plugins directory.

Note that this only partially works with Fiji because QuPath is unable to use ImageJ2-related features. This is not due to lack of effort, but so far all attempts have been thwarted by too many incompatible dependencies between the software applications (including the required Java version).

Q95: When I import the image back from ImageJ to Qupath, does it overlay the two images?
  • When you import the image back from ImageJ to Qupath, does it overlay the two images, or does it replace the qupathfile for the imageJ reduced size one

You can’t import the image back, only the ROIs (as QuPath objects). Therefore the image remains unchanged. QuPath will translate and rescale the ROIs if needed.
This means you can use ImageJ to analyze an image at a low resolution, but get the results back into QuPath to view them on the full-resolution image.

No pixels are harmed in this process.

Q96: Can I create a script to batch the QuPath > ImageJ > ROI selection > back to QuPath pipeline, for all the slides in a project?
  • Can you create a process to batch that QuPath > ImageJ > ROI selection > back to QuPath for all the slides?

Yes, I wrote a blog post about this at https://petebankhead.github.io/qupath/scripting/2018/03/08/script-imagej-to-qupath.html

However the API has changed since then; it’s probably easier now. Write on image.sc if you need it.

Q97: Can I export the annotations in form of a z-stack which then could be used in ImageJ to produce a 3D image?
  • Can you export the annotation in form of a 3D stack which then could be used in ImageJ to produce a 3D image?

Yes, the ‘Send region to ImageJ’ command can optionally export z-stacks, including all annotations as ImageJ ROIs (added to an Overlay).

Q98: Can I define a region based on a thresholded heatmap?
  • Can you export the heat maps or send them to ImageJ for further processing ? E.g define a region based on a thresholded heatmap ?

In principle yes, via ImageJ/scripting. But precisely how will depend upon defining exactly what kind of heatmap is needed. Ask on image.sc if you need it.

Deep learning

Q99: Can I use other deep learning models with both QuPath and other software?
  • Are those tensorflow algorithms applicable to other softwares? (e.g. HALO)

That will really depend upon the model and the other software… see https://bioimage.io for a project aiming to establish a model zoo for bioimage analysis.

Q100: Can I use other deep learning models with QuPath?
  • Which deep neural networks are available in Qupath? Can we customize them or import a custom network into Qupath?
  • Are deep learning models coming anytime soon?
  • Yesterday I attended to the Neubias webinar ““Introduction to nuclei segmentation with StartDist””, and they were using StarDist (or at least their own version) in QuPath. This morning I’ve checked the QuPath’s Github and I’ve watched you are incorporating tensorflow and, in particular, StarDist in QuPath. My question is about incorporating other (custom) deep learning models in QuPath, for instance, as is performed in ImageJ using DeepImageJ. Is it possible (or it will be possible) in a future milestone?

We’re working on this - it should be possible in future releases!


Q101: How do I use StarDist with QuPath?
  • Are these StarDist models the same pre-trained models used in the StarDist Fiji plugin? Only applying those pre-trained models for the moment?
  • Can we get this Stardist script for qupath ?
  • can you explain better stardist used in qupath
  • Please, share instructions of how to use stardist in QuPath for nucleus segmentation!!
  • Would it be more feasible to implement the StarDust code into the QuPath?
  • This alternative cell detection is great. Will it be implemented in the last release? or will you make a tutorial available online?
  • Is there a way from the segmentation done using other tool like stardist to still expand the cells to deetc the cytoplams ?
  • Please do post the code for implementing StarDIst into QuPath!
  • will stardist script be integrated in QPAth?

This is all now written up and explained at https://qupath.readthedocs.io/en/latest/docs/advanced/stardist.html


Specific applications

Q102: Can I use QuPath with imaging mass cytometry samples?
  • Can I use QuPath with imaging mass cytometry samples?

It’s worth a try! I’ve successfully opened, viewed (with the channel viewer), detected & measured cells in imaging mass cytometry samples using QuPath.

Q103: Can I quantify images with QuPath?
  • Can we quantify the images ?

Potentially yes… but first you need to define precisely what you want to quantify.

Q104: Can QuPath be applied to images from fields other than biology?
  • can you use it if the object is not biological i.e. im looking at fluorescence from chemicals in fibers

Yes, many of the tools in QuPath can be applied to different kinds of image.

Q105: Is it always CD3 and CK on spatial analysis?
  • is it always CD3 and CK on spatial analysis?

No, QuPath just gives the basic commands, which are designed to be general-purpose - you can combine and apply them for any staining.

Q106: Is there a collagen detection tool in QuPath?
  • Is there a collagen detection tool similar with the one for the cells?

No… but QuPath contains the pieces you might need to create a custom one for your application (e.g. pixel or superpixel classification)

Q107: Is there a way I can correlate Haralick feature analysis with distance to annotations?
  • Is there a way you can correlate Haralick feature analysis with distance to annotations?

All the measurements should be visible if you create a measurement table. You can export this and investigate correlations however you might wish.

Q108: Which are the best platforms to work with multiplex data extacted from QuPath?
  • Which are the best platforms to work with multiplex data extacted from QuPath?

I don’t have an answer, but you could ask the user community on image.sc

Q109: Is there any way to train QuPath to detect spots within cells?
  • Is there any way to train QuPath to detect spots within cells? Like in an RNA Scope or Base Scope Image?
  • can you identify foci into nuclei in fluorescence image and count the numbers of foci/cell

For now there is a ‘Subcellular detection’ command… but it is rather limited. There are plans to improve this!

Q110: Can QuPath <specific research project question>?
  • Hi, I would like to measure a diameter of vessel in the mouse brain in response to calcium influxes. Would that be possible to use QuPath?
  • Is the Qupath able to count mitoses in the melanoma you have shown?
  • How would you diferentiate tumor cells and endotelial cells after doing positive detection?
  • For CD3, can I get the % positive of lymphocytes by number of positive immune cells / total number of immune cells?
  • Maybe it was explained but I missed, can one classify different cell types according to the level of expression of different markers? Stem cell: high Marker 1, high Marker 2, low maker 3. differentiated cells: loq 1, low 2, high 3. And then measdure distance of macrophages to these cell types defined?
  • How would you on a single fluorescent image with multiple channels to also incorporate a stromal protein identification (using say the pixel classifier) so you then have the ability to quantify cell phenotypes within areas/annotations high or low for the stromal protein of interest?
  • Is it possible to classify and analyze Collagen-Type-I with the positive cell detection if I trained the classifier before? If yes, what is the best way to do this?

Possibly - please start a conversation on image.sc! The details are usually important, so these would need a longer discussion. There are some forum discussions here.

However, you should know two things:

  • QuPath is developed by researchers in academia. The built-in functionality depends upon projects we are working on… and those projects depend upon our collaborations and funding.
  • The goal of QuPath isn’t (primarily) to offer out-of-the-box solutions to specific problems, but rather to give the tools to solve a much wider range of problems. If you have a specific question you want to answer from your images, QuPath can help you answer that question - but it might still require a lot of additional work.

That said, the user community have already come up with lots of creative ways to apply and extend QuPath. And as the software continues to be developed, more and more complex analyses become possible.

Otherwise, if you’re working on a hard-but-important analysis problem, maybe we could collaborate to solve it together – making the solution part of a future QuPath release :slight_smile:


4 posts were split to a new topic: Updating an existing object classifier