The QuPath webinar for NEUBIAS Academy is now on YouTube.
Approximately 200 questions were asked during the webinar. Over the past week we’ve been arranging these into themes and answering as many as we can… resulting in this post.
Sometimes we’ve grouped questions and/or reworded them for ease of navigation, but we’ve included the original wording for completeness and clarification. I apologise if we’ve missed your question or haven’t done it justice in our (necessarily short) answer; feel free to start a new topic here if you want to discuss anything further.
Apart from that, I’d like to give huge thanks to @Ofra_Golani, @lmurphy and @melvingelbard for all their work as moderators and in helping to put this together, and to @Julien_Colombelli, @oburri and everyone else at NEUBIAS who are making the webinars happen!
- Working with images
- Processing & analysis
- Beyond what’s built-in
- Specific applications
Q1: When will the new version be available on QuPath's webpage?
- When the new version will be avialable in Qupath webpage ?
You can access the latest version here - https://github.com/qupath/qupath/releases/latest
The next stable release is planned for the next few days… so please download the latest milestones and report any bugs quickly so they can be fixed in time!
Q2: How do I install & update QuPath?
- If I already run QuPath, how do I update it? Do I need to uninstall the old one and then install the new?
There are installation instructions on the download page and at https://qupath.readthedocs.io/en/latest/docs/intro/installation.html
You can have multiple versions of QuPath installed at the same time, but it is not recommended to mix versions for analysis. Some commands may have changed behavior, and there can be incompatibilities in file formats/projects.
In general, a file edited in a newer version might not open in an older one.
But if you need to import data from an older version to a newer one (e.g. because you want to use annotations you made previously) this is possible. In v0.2.0-m11 there is a new Import images from v0.1.2 command to help with this.
Q3: Are there any differences when QuPath runs under Windows, Mac or Linux?
- Are there any diferences when QuPath runs under Windows, Mac or Linux?
They should be equivalent. However, QuPath is primarily developed on Windows and Mac. The Linux version has been tested less by the developers.
Q4: Which image formats are supported by QuPath?
- All formats can be open by QuPath? (.czi, .tif, …)
- Does it support all kinds of image files taken on different microscopes? for example .lif file from leica?
- Does QuPath support Bioformat, similar to ImageJ, so we can import proprietary files , such as .lif (by Leica microscopes)?
- I have a curiosity (I’m sorry I’m very naive in the field): which image file formats are supported by QuPath? Are there any limitations? Could I, for examples, use files directly exported from an Axioscan acquisition (in the original format)?
- Is there a list with the supported file formats?
QuPath supports many formats, with the help of Bio-Formats and OpenSlide. You can see a full list of supported formats in the official docs: https://qupath.readthedocs.io/en/latest/docs/intro/formats.html
Q5: If my images are multispectral images, do I still follow the same file import pipeline?
- If your starting images are a multiplespectral Images, do you still follow the same file import pipeline?
You import images the same way in QuPath. QuPath doesn’t contain any specific spectral unmixing algorithms; depending upon what you need to do, you may have to perform this step elsewhere and import the unmixed images.
Q6: Can QuPath read image metadata from a file?
- Can metadata about the images be read in from a file?
Yes, on the left of the software there is an Image tab under which you can see the main metadata that has been read (as long as that file format is supported).
Q7: Does drag & drop influence which image reader is used?
- Is drag & drop causing any issue with metadata recognition? Are images "Bio-Formats interpreted" automatically?
When importing an image to a project, an import dialog will automatically appear from which you can optionally specify which library is used to open the image from an ‘Image provider’ drop-down list.
If you leave this as the default, then QuPath will choose what it thinks is the ‘best’ image reading library for the particular file format. For TIFF-based formats, this involves checking the metadata so that it will use ImageJ to read images that were previously written by ImageJ, Bio-Formats for OME-TIFF, and Bio-Formats or OpenSlide for most others.
(Basically, QuPath tries to avoid the issues Fiji has in this regard - but specifying the import library from the drop-down menu is the easiest way to control it).
Q8: Can I combine different scenes (e.g. generated by the Axio Scan (Zeiss)) into one?
- Can you combine different scenes (which e.g. is generated by the Axio Scan (Zeiss) into one?
Not currently, but it’s something I’ve been thinking about and could be discussed for a future version.
It is not entirely straightforward to solve in a general way - see Improve Bio-Formats Image Position Metadata
Q9: Can QuPath open mrxs files with 4 or more channels?
- QuPath has a problem to open mrxs. files with 4 chanles, do you know how to solve this problem?
Q10: There is a square pattern on top of the image. Is this an artefact of the slide scanner?
- There seems to be a square pattern on top of the image, visible at high zoom. I have seen this in other images from slide scanners. Is this an artefact of the slide scanner?
I’m not sure what pattern you’re referring to or which image; there’s an ‘overview’ square on the top right, which you can turn on or off from the toolbar. But it’s also quite possible there are artifacts in the slide as well.
Q11: Do image files need to be saved onto your computer in order for QuPath to access them, or can they remain on a server/cloud-based storage?
- Do image files need to be saved onto your computer in order for QuPath to access them, or can they remain on a server/cloud-based storage?
- Do you need to download this in the computer that is connected to the microscope? Or can it be done it offline after we collect the images ?
QuPath can be used offline, as long as the images are stored in an accessible location (e.g. if the images are on a remote server, QuPath will not be able to read them without a connection to the server).
QuPath includes some built-in support for images hosted by OMERO (using its web API; only RGB is supported). There is also a custom extension for the Google Cloud API (created separately).
Q12: Can QuPath be used to analyse Z-stacks?
- Also can QuPath analyse z-stacks
- Also, can QuPath analyze 3D images as well?
- Can QuPath work with image stacks?
QuPath can open z-stacks and time series (using Bio-Formats), and a slider will appear in the top left to navigate them. Any regions of interest that are created have their locations stored, so they relate to the correct slice, and you can also apply the pixel classifier.
But the analysis remains primarily 2D; more sophisticated 3D work is likely to require scripting (or other software).
Q13: Why does QuPath use so much memory when importing TIFFs?
- Hi I was wondering if you have encountered memory problem while importing tif images into the project? I found it used much memory proportional to the number of images imported and needs to manually clear the memory in QuPath. Is there any solution to it or is it already fixed in the new version? Thank you!
- Importing multiple tif images into the project at the same time will use a lot of memory. Is there a way not to use memory when importing?
It depends upon the TIFFs… if they are pyramidal, memory use should generally be quite modest.
An issue with importing large, non-pyramidal TIFFs was reported this week: Several images are "Image null" when loading in QuPath
It was fixed within 2 hours, and the fix will be in the next release - so if a problem remains, please report it promptly and clearly and we’ll try to sort it out!
Q14: Can I see the live RAM usage of QuPath?
- Hi, can you see live RAM usage of QuPath?
Yes! View → Show memory monitor
Q15: I sometimes have problems with analysis or export. What are the hardware requirements for QuPath?
- How can you improve the overall performance of qupath? I have an i7 macbook pro with 16GB ram and sometimes have problems with the analysis
- Hi, I was wondering what are the preferred recommendations for the hardware on the computer QuPath is working on. I know it says i7 processor and >16 GB RAM, but my computer has that, I made available 12 GB RAM for the programme and I still run into "java heap space" errors when using some scripts. Is this still due to the hardware or something else is going wrong? (The script is one to merge results files into 1 results file, in my case annotations results are in the results files). Thanks
Performance problems can often be resolved by approaching things in a different way; for example, the new Export measurements command may be much more efficient than old methods of scripting export. It is a good idea to post details on image.sc for any specific problems.
For a discussion on hardware requirements, see Good pc for QuPath v.0.2.0 M9 and future versions
Q16: How easy is it to run some of the functions in parallel (multi-threaded)?
- How easy to run some of the functions in parallel, e.g. multi-threaded, so CPU threads are involved in calculations?
Many functions already do run in parallel, but not all. It can depend upon the size of the region being analyzed (e.g. cell detection for a large region will be divided into tiles, and the tiles processed in parallel; but a small region will not be parallelized).
QuPath will take care of this, you don’t need to do anything extra.
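The tiling strategy described above can be sketched in a few lines. This is a hypothetical illustration in Python (not QuPath’s actual Java implementation): a large region is split into fixed-size tiles, and the tiles are processed concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of tile-based parallelism (illustrative only):
# split a large region into fixed-size tiles, then process them concurrently.

TILE_SIZE = 512

def make_tiles(width, height, tile_size=TILE_SIZE):
    """Split a (width x height) region into (x, y, w, h) tiles."""
    return [
        (x, y, min(tile_size, width - x), min(tile_size, height - y))
        for y in range(0, height, tile_size)
        for x in range(0, width, tile_size)
    ]

def process_tile(tile):
    # Stand-in for per-tile analysis, e.g. running cell detection on one tile
    x, y, w, h = tile
    return w * h  # here: just count the pixels covered

tiles = make_tiles(2000, 1500)
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_tile, tiles))

print(len(tiles), sum(results))  # every pixel is covered exactly once
```

A small region (smaller than one tile) produces a single tile, which is why small regions are not parallelized.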
Q17: Does QuPath use my graphic card (GPU)?
- Congrats for QuPath! What about GPU optimization (making use of NVIDIA CUDA, for example), do you have any plans for it?
- Does it use GPU or CPU only for training and classification ?
If building QuPath with TensorFlow, you can optionally use the GPU version (although it is very fussy about cuDNN…).
For everything else, our current focus is stability and functionality – and finding efficient ways to do things that don’t require any particular hardware. But we realise that this will have limits, and we are certainly also thinking about GPUs. However, many bottlenecks depend upon things that cannot be solved by the GPU alone (e.g. reading image tiles, the user interface thread).
The interactive machine learning uses OpenCV as the processing library, which uses the CPU (but is highly optimized). It is designed so that other machine learning libraries could potentially be used, if suitable extensions are written.
Q18: Why are some commands marked as 'deprecated'?
- The two available feature extraction methods have been marked as deprecated in m10. What is the future of feature extraction in Qupath? Can we call feature extraction methods from ImageJ in Qupath groovy ?
- What does it mean that ‘simple tissue detection’ is flagged as ‘deprecated’ in the newer versions (m10/m11)? Is it still usable and supported?
The ‘deprecated’ flag acts as a warning that the days of that command are numbered… it is likely to be removed in a later version.
The reason is usually that the command a) isn’t considered terribly useful, b) has been replaced by a better alternative, or c) will be replaced by a better alternative soon. Removing old commands helps make QuPath more maintainable, and creates space for new features to be added without the menus becoming excessively clogged up.
If you find a command you particularly need has been marked as deprecated, feel free to ask on the forum why and discuss its future.
Q19: How can I contact you directly?
- Can you provide us with the contact information if we have query regarding QuPath?
- Hello moderators. I am a pathologist and am familiar with a couple of proprietary WSI analysis tools. Over the past few years I have been using Definiens Tissue Studio in collaboration with the image analysis team. A few weeks ago I started exploring QuPath. In the context of this talk and my recent efforts, I have a few specific questions for Pete. Would it be possible to get an appointment (~15 to 30 minutes) for a brief discussion? Thank you for organizing this informative webinar.
For discussions about potential research collaborations etc., you can find my contact details online (look for Peter Bankhead at the University of Edinburgh).
For questions about QuPath, please do use https://forum.image.sc/tags/qupath - making sure your post has the ‘qupath’ tag. I’m afraid I really can’t answer software questions individually by email - there are just too many of them.
Q20: Are there online sample datasets I can have access to?
- Are there example datasets we can have access to?
- Is there a sample dataset available online?
You can find the sample images used in the documentation (and much of the webinar) at https://qupath.readthedocs.io/en/latest/docs/intro/acknowledgements.html.
There are some other sources of whole slide images online (e.g. TCGA, the CAMELYON and ANHIR grand challenges).
Q21: Is there any updated developer documentation (e.g. JavaDocs)?
- Is there any updated developer documentation? Or a JavaDoc to see the API?
Not hosted online yet, but see https://qupath.readthedocs.io/en/latest/docs/reference/building.html#building-javadocs.
v0.2.0-m11 is the first version that should have (almost) every public method documented in some form.
(Since this involved writing literally thousands of javadocs over the past few months, the quality may be variable… but the aim is to be stricter in maintaining documentation from now on).
Q22: Is there a cheat sheet for shortcuts? Can I change them?
- Can we change the shortcut key for a command?
- I just wonder if the commands are the same for windows 10
- COM+SHIFT MAC?WHAT ABOUT WINDOWS?
- Is there a cheat sheet of the shortcuts?
You can see the list of shortcuts at https://qupath.readthedocs.io/en/latest/docs/reference/shortcuts.html or via the ‘Command List’ (Ctrl/Cmd + L).
Shortcuts are the same for Mac and Windows, swapping Cmd for Ctrl.
There is no easy way to change existing shortcuts.
Q23: Is it possible to create sub-projects within a project?
- How to make subprojects withthin the project??
No, there is no sub-project concept within QuPath. But you can set metadata tags to organize the entries.
Q24: Is there a way to make projects self-contained, using the relative paths to images?
- Is there a way to make the images relative to the QuPath project? So that QuPath projects can be self-contained?
v0.2.0 projects use a kind of hybrid approach already: storing both the absolute and relative paths to the image files.
When you open an image, it checks both. If you move a project, but maintain the relative locations, QuPath will still prompt you to update the paths - but it will prepopulate all the paths for you, so you just need to accept the changes by clicking one button.
This is because QuPath politely doesn’t want to change the paths stored in the .qpproj file without permission.
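As an illustration of why storing the relative path helps, here is a small Python sketch. The paths and folder layout are made up, and QuPath’s actual resolution logic is internal; the idea is simply that as long as the project and images move together, the relative path still resolves.

```python
from pathlib import PurePosixPath

# Illustrative sketch only - the paths are made up and QuPath's actual
# resolution logic is internal. The idea: store both the absolute path and
# a path relative to the folder enclosing the project, and try both.

project_dir = PurePosixPath("/data/study1/project")
image_abs = PurePosixPath("/data/study1/images/slide1.svs")

# The relative form survives moving the enclosing folder
relative = image_abs.relative_to(project_dir.parent)

# If 'study1' is moved wholesale to /backup, the relative path still resolves
moved_parent = PurePosixPath("/backup/study1")
recovered = moved_parent / relative
print(recovered)  # /backup/study1/images/slide1.svs
```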
Q25: Can I create a project from a folder that already contains images?
- Can I create a project from a folder that already contains images? Like opening a Folder in ImageJ?
The project folder needs to be empty, but you can then add all the images from your other folder in one import.
Q26: How many slides can be added to a project ?
- How many slides can be upload for a project ?
There is no built-in limit, but if you have thousands of images then you may wish to split them into separate projects.
Q27: Are the key:values from OMERO automatically imported into QuPath?
- Do the key:values from OMERO automatically import into QuPath?
- Have you thought about how to integrate thie ‘project’ concept with the open microscopy initiative, i.e. OMERO?
No, the OMERO integration is at a very early stage. More to come in the next year!
Q28: Why does my project file look like a PDF, but still open?
- im using 0.2.0 version and it is from the zip file i dowloaded. but now within the project folder my project file is shown as a pdf file. interstingly i can drag and open it on QuPath but still puzzeled.
This is mysterious. My guess is that the weirdness occurred in Windows, outside of QuPath, but if it persists or causes problems you can start a discussion on image.sc.
Q29: Can I change the colors of my objects to make them easier to see?
- Can you change the colour the annotations and detections are marked when selected or not? For example with tissue detection, if the detection is selected it is bright yellow, zooming out when removing not so usefull parts of the detection, its quite hard to see this yellow on a grey background.
Yes, search for ‘color’ under Edit -> Preferences to see all the colors you can change.
Rather than highlighting selected objects with a color, you can also highlight them using a thicker line/bounding box if you prefer.
Q30: Is there a way to have the full range shown in the Brightness & Contrast histograms?
- Is there a way to have the full range shown in the B&C histogram and not adjusted to min and max values for each image?
Q31: Is there a method to add custom look-up tables for measurement maps?
- Is there a method to add custom lookup tables for measurement maps?
Not officially, but there is a trick…
- Make sure you have a user directory set in preferences.
- Create a subfolder inside the user directory and call it ‘colormaps’
- Create a tab-delimited file (.tsv) that looks like the ones found here
- Put the .tsv file in the ‘colormaps’ folder.
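For example, a simple colormap file could be generated with a short script. This is a hedged sketch: it assumes the file format is one tab-separated red/green/blue triple (values 0–255) per line, so check it against the linked examples before relying on it.

```python
# Hedged sketch of generating a custom colormap file. Assumption (verify
# against the linked examples): one tab-separated red/green/blue triple,
# each value 0-255, per line.

def linear_colormap(n=256):
    """A simple blue -> red ramp with n entries."""
    rows = []
    for i in range(n):
        r = round(255 * i / (n - 1))
        rows.append(f"{r}\t0\t{255 - r}")
    return "\n".join(rows)

# Save into the 'colormaps' subfolder of your QuPath user directory
with open("my_colormap.tsv", "w") as f:
    f.write(linear_colormap())
```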
Q32: Can annotations overlap?
- Can you have one annotation inside another one ?
- Can you have overlapping annotations?
Q33: Can the behaviour of the brush/wand tool be adjusted?
- How to make brush more or less sensitive?
By default, the width of the Brush tool depends upon the magnification at which the image is being viewed: it paints a smaller region when you are zoomed in and a larger one when zoomed out. This can be changed in the Preferences by unticking ‘Scale brush by magnification’ and setting a fixed brush diameter in pixels.
The Wand tool sensitivity can also be adjusted in the preferences.
Q34: When I draw annotations they have dotted lines and disappear once I stop drawing. Why is this happening?
- I can’t keep the annotations, when I draw it appears with a dotter line and then disappears. How can I save it?
It sounds like you have selection mode enabled. Selection mode changes the drawing tools so that they can instead be used to select objects (e.g. to manually classify them). Turn this off by clicking the button labelled with an ‘S’ in the toolbar.
Q35: Can I adjust annotations?
- If you run simple tissue detection, can you then edit the selected tissue with the brush or wand tool (holding down the Alt key) to exclude parts of the detected tissue from the analysis?
- Is it possible to do corrections on annotated images ?
Yes, if the annotation is selected (and unlocked) you should be able to expand or reduce using the annotation tool of your choice. To check if your annotation is locked or unlocked, right click the annotation you want to change in the ‘Annotation’ tab on the left.
Q36: How can I create the same annotation shapes on different images?
- Is it possible to draw a same size shape in different images in the same project?
If your annotations are rectangles or ellipses, you can use the Objects -> Annotations -> Specify annotation command.
For other shapes, see the next answer…
Q37: Can I transfer annotations made on one image to another image?
- Can Pete demonstrate how to perform image alignment = copy/paste multiple annotations to another image?
- Wondering if it is possible to move an annotation from H&E slide to overlay onto corresponding IHC slide in order to use the classifier from H&E slide (along the lines of virtual double staining)?
To some extent. If both images are open at the same time you can select one window (with the annotation) then the other (where it should go) and choose Objects -> Annotations -> Transfer last annotation (Shift + E).
More complex transfers of multiple objects can be done by scripting, but in all cases QuPath won’t (by itself) perform any automatic alignment at this time.
Q38: What is the rule for resolving hierarchy for a group of Points annotations? (sometimes the points are spread on different ROIs)
- What is the rule for resolving hierarchy for a group of Points Annotation? (sometimes the points are spread on different ROIs)
If the points are spread across different annotation ROIs, then they aren’t considered to be inside any of them from the point of view of the hierarchy. In this case, it might make sense to split the points into objects that are completely contained inside other annotations.
This could be scripted, or perhaps added as a command in a future version. Please start a discussion on forum.image.sc if this would be useful; the behavior of points in the hierarchy hasn’t received much attention.
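As a sketch of what such a script might do (illustrative Python, not QuPath’s API; the ROIs and points are made up): test each point against each annotation ROI and group the points by the annotation that contains them.

```python
# Hedged sketch: group points by the annotation ROI that contains them, so
# each group could become its own points object. Uses ray casting for the
# point-in-polygon test; the annotation shapes here are illustrative.

def contains(polygon, x, y):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def split_points(points, annotations):
    """Map annotation name -> points inside it; leftovers go under None."""
    groups = {name: [] for name in annotations}
    groups[None] = []
    for p in points:
        for name, poly in annotations.items():
            if contains(poly, *p):
                groups[name].append(p)
                break
        else:
            groups[None].append(p)
    return groups

rois = {
    "A": [(0, 0), (10, 0), (10, 10), (0, 10)],
    "B": [(20, 0), (30, 0), (30, 10), (20, 10)],
}
pts = [(5, 5), (25, 3), (15, 5)]
print(split_points(pts, rois))
```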
Q39: Can a cell boundary be limited by the annotation it is in?
- Can a cell boundary be limited by the annotation it is in? Especially important for preventing cellular overlap at the border of two annotation classes
Not with the current built-in cell detection algorithm, but ways to improve/constrain cell boundary estimation are planned for a future version.
See also StarDist section for progress.
Q40: Can I fill holes in annotations?
- Can you select a tumour area but fill in the holes (smaller than a certain size) within a mass to create a ROI?
Yes, there are two commands: Fill holes (for all holes) and Remove fragments and holes (to remove holes below a certain size).
(The second command has been renamed in v0.2.0-m11 from earlier versions)
Q41: Does QuPath support collaborative annotation?
- We are using Dropbox to create on a shared QuPath project (qpdata etc). Is there a plan to have shared Qupath projects where multiple users can annotate and collaborate together ?
Not by itself. QuPath is primarily a desktop application with a focus on visualization and analysis. I think storage and collaboration are best left to other platforms; however, QuPath can and should be able to integrate with such platforms.
For example, QuPath currently supports reading images from OMERO and there is a (separate) extension using the Google Cloud API. I think using QuPath for annotations in this way will require combining with something like this.
Q42: Can I export (possibly transformed) ROIs from QuPath for use elsewhere?
- I would like to have annotations read in other software that controls a system with the original slide. The issue is knowing the coordinates from the annotations to the position in the other system. Can QuPath create coordinates relative to reference points, EG the corners of a slide, and export these annotations/ROI’s?
Yes - ROI coordinates can be exported by scripting, and an affine transform can also be applied to the ROIs before export if they need to be mapped into another system’s reference frame; please post a question on image.sc if this is needed.
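For illustration, applying a 2D affine transform to ROI vertices amounts to a matrix multiplication plus a translation. A minimal Python sketch with made-up values (when scripting QuPath, this would typically use Java’s AffineTransform class instead):

```python
# Illustrative sketch (the matrix values are made up): a 2D affine transform
# maps (x, y) -> (a*x + b*y + tx, c*x + d*y + ty), which can re-express ROI
# vertices in another system's reference frame before export.

def apply_affine(points, m):
    """m = (a, b, tx, c, d, ty)."""
    a, b, tx, c, d, ty = m
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

roi = [(0, 0), (100, 0), (100, 50)]
transformed = apply_affine(roi, (2, 0, 10, 0, 2, 20))  # scale x2, shift (10, 20)
print(transformed)  # [(10, 20), (210, 20), (210, 120)]
```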
Q43: How can I export annotations as ground truth images from QuPath?
- About annotations and Stardist : can you show us how to export annotations as ground truth image from QuPath ?
- After cell segmentation. Can we save the information to be used in another software?
- Can you export the annotation in form of a 3D stack which then could be used in ImageJ to produce a 3D image?
- I would like to know if there is a tutorial that can help me to generaty the binary images need it as GT for stardist ?
- How to generate images that act like a GT for Stardist (binary images in tiff format)?
- How are the annotations (ROIs) saved? Specifically in which format? Geojson for example? If not, do you have tools to convert to other formats?
You can do this by using the ‘TileExporter’ - see https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_annotations.html.
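To make the idea of a ‘ground truth’ label image concrete, here is a toy Python sketch (rectangular annotations only, with made-up sizes): each annotation is painted into a 2D array with its own integer label and 0 as background, which is roughly the kind of output the labeled image export produces at real image scale.

```python
# Toy sketch of a labelled 'ground truth' image (rectangles only, made-up
# sizes): each annotation is painted into a 2D array with its own integer
# label, with 0 as background. QuPath's exporter handles arbitrary shapes
# and real image sizes; this only illustrates the output format.

def make_label_mask(width, height, rects):
    """rects: list of (x, y, w, h); pixel value = 1-based annotation index."""
    mask = [[0] * width for _ in range(height)]
    for label, (x, y, w, h) in enumerate(rects, start=1):
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                mask[row][col] = label
    return mask

mask = make_label_mask(8, 6, [(1, 1, 3, 2), (5, 3, 2, 2)])
for row in mask:
    print(row)
```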
Q44: Can I import third-party annotations (e.g. Aperio XML)?
- How to import annotation in batch ? I have a colleague annotated images on imagescope, Leica. Now I want to analyze them on QuPath. How to import those hundreds of annotations on Qupath? Thanks!
- Can you import third party annotations? (i.e. aperio xml)
In principle you can import annotations, but as far as I am aware there is no open specification for Aperio XML; for this reason, QuPath does not support it.
But see this user script: https://gist.github.com/DanaCase/9cfc23912fee48e437af03f97763d78e
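The linked script targets QuPath; purely as an illustration of the parsing side, here is a hedged Python sketch. It assumes the common ImageScope layout (Region elements containing Vertex elements with X/Y attributes); since there is no open specification, verify against your own files.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of reading vertices from Aperio/ImageScope-style XML.
# Assumption: regions are stored as Vertex elements with X/Y attributes;
# there is no open specification, so check your own files first.

APERIO_XML = """
<Annotations>
  <Annotation>
    <Regions>
      <Region>
        <Vertices>
          <Vertex X="100" Y="200"/>
          <Vertex X="300" Y="200"/>
          <Vertex X="300" Y="400"/>
        </Vertices>
      </Region>
    </Regions>
  </Annotation>
</Annotations>
"""

root = ET.fromstring(APERIO_XML)
regions = [
    [(float(v.get("X")), float(v.get("Y"))) for v in region.findall(".//Vertex")]
    for region in root.iter("Region")
]
print(regions)
```

The resulting coordinate lists could then be turned into QuPath annotations by a script such as the one linked above.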
Q45: Can I use QuPath to export images for publication?
- How can a small area of the whole slide image be exported as a TIFF, JPEG etc for use in a publication? Can the region selected for capture be in any orientation or is area selection restricted by the orientation of the pre-set selection tools? Can the resolution of the exported image be set at the time of selection / export?
All details about writing image regions/tiles can be found in the official docs: https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html
It should allow some flexibility regarding the format/parameters of the export. Feel free to experiment and ask on image.sc if you have any doubts!