QuPath Intro: Choose your own analysis (adventure)

Good enough for now, dig in!

Please create your own thread to ask questions. I will expand this in the future to fill in a few rough patches, as new features become available or requests are made, but it is not intended to be exhaustive. For further information, the Wiki, Pete’s blog, and Pete’s YouTube tutorials are all useful. Note, this is almost entirely written from my Windows 10/7 point of view; some instructions may vary for Mac/other OS options.


Intended as a guide for new users with links to topics of interest, hopefully streamlining the experience. If you see anything that should be corrected, please message me!

Some quick links (I am hoping to figure out how to make a map at some point…)
Which version should I use?
Starting a project
Objects in QuPath

I want to get to the analysis

Cell detection command
Generating summary data for a whole project
Generating features

Citing QuPath


Which version should I use?

There are currently 2 main variants available for download here:


0.1.2 is the current stable release, and probably the best one to get started with. Most publications use this version. It does require the BioFormats package and extension for many file types. If your files do not open, the first thing to check is whether you have BioFormats correctly installed.
0.1.3 (doesn’t really exist) has some significant improvements over 0.1.2 in certain areas (positive pixel detection, drag and drop folders for projects), but requires building it yourself using Gradle (more details here in Pete’s blog if you want to pursue this).

Projects in 0.1.2 and 0.1.3 are mostly compatible. There is a hard break with all projects in 0.2.0m1 and above.


0.2.0m# versions are the newer, possibly less stable releases (they are mostly fine :slight_smile: ). They have significantly improved features, but not everything is working. They don’t require BioFormats to be added separately (it is built in, and in fact attempting to add BioFormats can break them), but they do have a few in-progress features that either are not scriptable or are not entirely working. Use with caution; they are not a great idea for long-term projects unless you need particular features, as they will likely not be forward or backward compatible. Even between m# versions, as you might see if you try to run certain scripts from m2 in m3 :slight_smile:

M4 Out now! More details here: https://petebankhead.github.io/qupath/2019/08/20/fourth-milestone.html


If you use m2, the projects are far less portable, as fixed file paths were used instead of relative file paths. That means that even if you stored the images within the project folder, you couldn’t move the project folder to another computer/drive/location without editing the text file; see here for an example.

There is a script here which allows conversion of projects from m2 to m3, for anyone wanting to swap to the newer version.

Would you like to read about the interface, learn more about projects, or perhaps skip straight to the analysis?


Overview: The menus are at the top, a button bar with a variety of quick access options is below that, the Analysis panel is on the left (in yellow), and the Viewer takes up most of the screen space in the lower right.

Menus at the top:

Some notes of interest. If you are having memory issues (Java out of memory errors), you will likely want to look in Help->Show setup options to adjust your maximum allowed memory. If you are running into strange errors on a variety of projects, one of the first things to try is Edit->Reset preferences. The View menu has several options that are not immediately available on the button bar, such as showing TMA grid labels, filling in annotations (useful for distinguishing tumor/stroma regions), or adjusting what parts of cells are shown.

Button bar:

Orange: The ruler icon turns the left-hand set of tabs, called the analysis panel, on and off. This can be useful if you need more screen space for manual annotations. The crossed arrows are the Move tool (M key), and will be your default method of navigating the image. Left click within the Viewer and drag in most places to move the view.

An exception to this is if you have an unlocked annotation. In that case, you will select the annotation and drag it. On that note, BE CAREFUL WHEN EDITING UNLOCKED SIMPLE TISSUE DETECTIONS; accidentally dragging these around in 0.1.2 was, occasionally, heartbreaking. In 0.1.3+ the Edit->Undo function (CTRL+Z) can be very useful here. In 0.1.2, hopefully your data is saved (and saved often), and you can use CTRL+R to revert (also in File menu).

Red: These are some annotation drawing tools that I almost never use. The line tool can be useful for measuring distances. Their use is pretty straightforward. All initially created annotations start unlocked.

Green: These are the bread and butter of manual annotation. The Brush tool (B key) and Wand tool (W key) are both strongly linked to the current zoom (unless you turn this off in Preferences), so use that to your advantage. Both tools have one or more options in the Preferences menu that can be used to modify their effects. Holding the ALT key while drawing with either will act as an eraser. In 0.1.3+ CTRL+SHIFT+tool will draw up to, but not cross over into, existing annotations. The wand tool’s behavior is dependent on what is visible on screen. That includes color transformations in brightfield (looking at the Eosin “channel”), or turning on and off channels in multichannel images. Zooming out and only displaying a single channel or color transform can be a very easy way to quickly annotate large tumor areas.

I usually have M, B, and W bound to various buttons on a multifunction gaming mouse.

Points tool: Create annotation object points manually. If you want to pick out objects that image analysis methods are not currently working for, or want to have a biologist/pathologist mark the correct “targets” for an image analyst to try and create an algorithm to find, this may be the tool for you.

Purple: A selection of tools to modify what is shown within the Viewer. The first is the Brightness and contrast dialog, which can be used to adjust the visibility of various channels or, surprise, surprise, the brightness and contrast of brightfield images. Many of these options can be controlled through the number keys, which is especially handy in low channel number (<10) multiplex images. The main trick to this dialog that might not be immediately obvious is that you can double click on the Min display or Max display to see and edit the exact value.

Magnification: The number immediately to the right of the Brightness and contrast button is the current magnification of the Viewer. Normally zoom would be controlled with the mouse wheel, but you can also double click this to manually edit the zoom. Useful when taking equivalent screenshots across multiple areas or images.

Fit to Viewer: The magnifying glass with a square zooms out (or in) until the entire image fits within the viewer.

Object buttons: These determine the visibility of various types of objects. The first turns annotations on or off, the second controls the TMA grid, the third detection outlines (tiles or cells), and the fourth fills in the tiles or cells. The fill has no effect if the detection outlines are not visible. The fill can be very useful when looking at classified detections, or when using Measure->Show measurement maps.

Note: Selected objects will always be visible regardless of these settings. 0.2.0m# has an option where this behavior can be toggled.

Slider: Opacity of all objects, including selected objects. Fear that nothing is working is often the result of having turned this all the way down and then forgotten about it.

Measurements: Quick access to dialogs that have various measurements listed for different object types.

Preferences: Where you can change many of the ways QuPath behaves, how various file types are treated, etc.

ImageJ: Quick access for sending regions to ImageJ.

New in 0.2.0m#

Green: Select tool. The brush, wand and other area tools can now be used to create temporary structures that select every sub-object completely contained within them.

Blue: Pixel classifier visibility.

Analysis Panel:

Project tab: As shown above, this is your list of images within your project, and is the primary way to create a new project, add images, or remove images in 0.1.2.

0.2.0m# has added significantly more functionality to this tab, including the ability to add metadata tags for a hierarchy-like structure, selecting groups of images for removal, etc. Below shows adding a metadata tag for “Group” A and B, and then sorting by the “Group” metadata.

Image tab: This tab is probably the most important to share when asking for help on the forum, as much of what goes wrong with images and image analysis stems from settings found here. Missing Pixel width and Pixel height metadata is one of the more common problems.

Annotations and Hierarchy tabs: Similar in many ways, as they both show a list of objects, and have a data table for the bottom half of their viewing area that shows the measurement list of a selected object. Annotations shows only annotations, and no order indicating whether any annotation is contained within another. It also has an editable list of available classes. You can create new ones by right clicking, or edit the color of a class by double clicking on it. Right clicking on existing classes also gives a variety of options including the ability to toggle a class’s visibility on/off. The Hierarchy tab lacks the class options, but uses the extra space to show what objects are contained within other objects. This list can become quite cumbersome to navigate with hundreds of thousands of detections (and slow down performance), so 0.1.3+ has options within Preferences to toggle off detections within the list.
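When the object lists themselves become too slow to browse, a quick script can give you a census of the hierarchy without rendering it. A minimal sketch using QuPath’s built-in scripting functions (run from the script editor under Automate; requires the QuPath environment):

```groovy
// Quick object census for the current image, printed to the script editor log
println 'Annotations: ' + getAnnotationObjects().size()
println 'Detections: ' + getDetectionObjects().size()
```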

The four buttons in the middle of the Annotation pane mostly control annotations. Delete and Delete all do exactly what they say, to either the selected annotation or all annotations. Set class takes both the currently selected annotations and the currently selected class (single left click), and assigns that class to the annotation. Auto set saves a couple of clicks by assigning newly created annotations to the selected class.

Workflow tab: Mostly used at the end of an analysis project, this tracks most of what you have done (not all functions are included), and can be used to generate a quick script including most steps. Generally you will want to take the total script and delete out all of the lines of duplicate attempts, then tidy up what remains of the actual analysis. I prefer to do this within the script, but you can start with Create workflow and edit the script through a graphical interface as well. Any individual function that will show up in the script can also be selected in the Command history in order to view what parameter values were used.


The main viewing area. Once you have “clicked into” this, you can use the drawing tools, the 1-0 keys to control what is visible, among other things. Some functions will not work unless the Viewer is the “active window.”

In the upper right you will see your slide overview thumbnail, and, usually, a red rectangle indicating where the viewing area currently is. You can click within this “map” to move the Viewer to a different location. In the lower left and lower right the scale bar and location plus pixel values can be found (unless turned off in the View menu).

Right clicking on a selected annotation within the viewing area brings up a context menu that includes a few important options, including the ability to lock or unlock the current annotation.

Multi-view: Regardless of whether you have an object selected, the context menu for the Viewer should always include the Multi-view option, which allows you to add rows and columns in order to view additional images at the same time. This can be very useful for sequential slice studies.

Off-topic but related, 0.2.0m# introduces the View->Mini viewers->Show channel viewer, which can allow simultaneous, but separate, visualization of all channels in an image at once.

This concludes your basic tour of the interface, and should give you enough information to get started! If you are familiar with creating projects, jump straight to the first steps in analysis, or learn more about starting a project and why it is important.


Starting a project

Should you start a project?

Well, unless you want to quickly open an image and take a look at it, the answer is “almost always.” Some features will only work in projects, and many scripts will only work in projects (as exported files and folders are located relative to the “project” file path). A project will also allow you to quickly and uniformly apply your script/pipeline to a large set of images.
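As an example of why scripts care about this, export scripts usually build their output paths from the project directory, so they fail without one. A minimal sketch using QuPath’s scripting functions (the “results” subfolder name is just an illustration; requires the QuPath environment):

```groovy
// Build an export path relative to the current project and save
// this image's annotation measurements there
def path = buildFilePath(PROJECT_BASE_DIR, 'results')
mkdirs(path)
def name = getCurrentImageData().getServer().getShortServerName()
saveAnnotationMeasurements(buildFilePath(path, name + '.txt'))
```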
As of 0.2.0m3, some image types with multiple scenes or sub-files will only open in projects.

To start a project you will need an empty folder.

0.1.2+ - Use the Create project button in the Project tab, select your folder.

0.1.3+ - Drag an empty folder into the open QuPath window

Import images

I recommend placing the images you want to analyze within the project folder whenever possible. This makes the project as a whole more portable, as you can copy it onto the network, a USB drive, etc, and open it anywhere without disrupting the file path to the images.

0.1.2+ - Drag a single image into the QuPath window that has a project active. Or in the Project tab, click Import images which will allow you to navigate to the image location and select large numbers of images.

0.2.0m3+ - Select and drag large numbers of images into an open QuPath project. Requires a project to be open, follow the dialog instructions after.

Once you import an image, a project file (project.qpproj) will be generated in the project folder. Since I work with many projects, and in Windows, I find it useful to rename the project. Once it has been run (by double clicking it), it can be accessed more quickly by right clicking on the QuPath icon on the taskbar, which will then have it in the list of recently opened files.

It isn’t going to be quite so easy to figure out what all of those “project” projects were.

Project files themselves are the .qpproj file within the originally empty project folder. This is what you would run, or drag into an open QuPath window, or open through the file menu. The data files are stored in the “data” folder, and are named based on their association with an image file. They store all of the objects you see in the overlays (annotations, detections). If you are doing something destructive with your data, it is sometimes a good idea to back this folder up somewhere else so you can return to a previous step if you don’t like the results after a “Run for project.”

In 0.1.2/3 you will usually get a Scripts folder within the project if you save scripts in the default location (Automate->Project scripts). This function is broken for 0.2.0 through at least m4, and you will have to rely on setting a scripting directory for all of QuPath.

If you create a trained classifier, you will also get a “classifiers” folder.

Additional folders can be created (like the Images folder mentioned above) without interfering with the project itself, though I don’t recommend editing file names until you are comfortable with how the project works.

You now have a project and are ready to either learn more about how objects in QuPath work, or jump into your analysis.

Objects in QuPath

Images in QuPath are not modified, basically ever, but data objects of two general types are created, annotations, and detections. These objects are not “burned in” to the image, but are instead visualized as an overlay. The objects themselves contain data relevant to the pixels “below” them.

Annotations are intended as large, generally simple, structures and have measurements that update automatically as they are changed. This makes them slower to render and handle in terms of analysis, and having many annotations or highly detailed annotations (with a lot of vertices) can lead to massive slowdowns. Detections are generally smaller, simple objects that are not structurally modifiable, but the rigidity of their data means the program can support many more of them. These are general rules, and there are exceptions. Spots, for example, are annotations.

Overall, ~99% of projects will involve creating one or more Annotation objects, and then generating further Detection objects within them. Those Detections will then be classified or measured in some way in order to produce a general number (% positive), which can then be saved in the “parent” annotation object. A list of annotation measurements across a variety of images can then be generated for a summary at the end of a project. The simplest example would be a list of image names that each had a single annotation, with the percentage of positive cells as the primary statistic of interest.
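The annotation-to-detection relationship above can also be used directly in a script. A sketch of computing a percent positive per annotation and storing it on the “parent” (the measurement name is just an illustration, and this assumes cells were already detected and classified Positive/Negative; requires the QuPath environment):

```groovy
// For each annotation, count its child detections and store % positive
getAnnotationObjects().each { ann ->
    def cells = ann.getChildObjects().findAll { it.isDetection() }
    def pos = cells.count { it.getPathClass() == getPathClass('Positive') }
    double pct = cells ? 100.0 * pos / cells.size() : 0
    ann.getMeasurementList().putMeasurement('Manual % positive', pct)
}
fireHierarchyUpdate()
```

Note this sketch only looks at direct children of each annotation, which matches the usual “cells inside a tissue annotation” structure.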

Ready to get started? Or skip to the end and learn how to generate summary data for your project?

What is your image?

Multichannel- This category includes fluorescence, MIBI, or any other collection of grayscale channels combined together into a stack that can be represented in 2D. I believe QuPath can handle up to 32 bit depth channels.

Brightfield- This category handles 8-bit RGB images. This is the industry standard for whole slide imaging, mostly due to the file sizes already being quite large. Some software/programs may default to a higher bit depth, which QuPath will not currently handle (our Zen software for Observers seems to default to 14-bit RGB images, which will not open without conversion).

If Other: What is it? Start a new topic. Phase contrast images of cells probably won’t work very well. Possibly some Vectra brightfield images taken in more than RGB channels would fit here.


With any brightfield image, the first step (aside from verifying that the correct image type is selected in the Image tab) should be verifying that your color vectors are fairly accurate.

This can be done through Analyze->Preprocessing->Estimate stain vectors. When using it, accepting the prompt to set the background is usually a good idea, as it makes the color vectors easier to see. Select an area with an annotation (keep that annotation selected, as it should be after you draw it), and run the command.

Most commands take this background value into account when generating data, so do make sure to get a bit of background in the annotation you are using to set your vectors (the one exception to that is the Cell detection command in 0.1.2).

Ideally, you will want the lines for your two stains (which do not have to be DAB and Hematoxylin, regardless of their labels) to match their angles with the pixels they represent. You can do this manually by dragging the balls at the end of the vectors, or click Auto in the lower left. Pete has more information about optimizing these here, along with other ways of selecting vectors. I highly recommend the use of the 1,2,3, and 4 keys while iterating through various color vector settings.
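For reproducibility, the vectors you settle on are recorded in the Workflow tab and can be replayed in a script. The numbers below are purely illustrative, so copy the generated line from your own Workflow tab rather than these values (requires the QuPath environment):

```groovy
// Illustrative stain vectors only -- take the real line from your Workflow tab
setImageType('BRIGHTFIELD_H_DAB')
setColorDeconvolutionStains('{"Name" : "H-DAB custom", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.651 0.701 0.29", ' +
    '"Stain 2" : "DAB", "Values 2" : "0.269 0.568 0.778", ' +
    '"Background" : " 255 255 255"}')
```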

Now that you have some decent color vectors set, what type of image are you looking at?

Do you have a single field of view? Tissue in a whole slide or large tiled image? A Tissue MicroArray?

Single field of view

For single field of view images, you usually will want to analyze the entire image. In the Objects menu, there is an option to Create full image annotation, or in a script you can use createSelectAllObject(true).

The true selects the object, so that whatever you run on the next line will have that object already selected. If you are running scripts multiple times on the same set of images, note that this does not clear all objects in the image in the same way that Simple Tissue Detection does. You may want to include a clearAllObjects() line first in order to prevent many overlapping sets of objects.

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Whole slide image

For a whole slide image, or any image where a subset of the image is the tissue itself, or even when only part of the tissue is of interest, you can use Analyze->Preprocessing->Simple tissue detection to create your starting annotation(s).

Threshold: For brightfield, higher numbers mean more inclusion of more lightly stained tissue (255 would be everything including white). For a clean slide with good, near-white background, I find values of around 200-230 are usually what I am looking for. If I want to include adipose tissue or the tissue is very lightly stained, I will occasionally venture higher, into the 240s. If darker stained tissue is of interest, or the background is very dark (lacking white balance on the scanner), sometimes you need to go very low on your threshold.

In general, higher number = more area (lighter tissue) selected, lower number = darker tissue selected.

Requested pixel size: Default of 20 can be good for quickly verifying your other settings. I usually start here, and then drop it to 5 or lower only once I am interested in excluding small holes in the tissue (veins etc). Pure downsampling.

Minimum area: Anything below this area size is ignored. Increase the value to prevent additional bits of tissue from being included in your analysis. This process is applied prior to the “Single annotation” merging all objects. If you have trouble setting this, I recommend drawing some objects with the brush tool, and checking their size in the Annotations tab to get a feel for the size object you would want to exclude.

Max fill area: The most confusing measurement for me initially, but this essentially determines how small of a hole you want to allow within your tissue. This is highly dependent on the “Requested pixel size” and, again, I encourage the use of the brush tool to determine what value you want to put here.

Dropping the requested pixel size to 5 and using a Max fill area of 5000, I was able to exclude this whitespace from the total tissue area. Increasing the Max fill area to 35000 resulted in this area being included as part of the tissue. Applying this across your tissue will reduce the effective tissue size, and maybe give a more accurate count of cells per mm^2. Would likely have very little effect on %positive.

20um pixel size, no other options.

Dark background: Only for fluorescence type images.

Smooth image: 3x3 mean filter to downsampled image. Effect is dependent on requested pixel size.

Cleanup with median filter: Reduces variation using a median filter.

With smooth image and median filter.

Expand boundaries: Sometimes useful if you are having trouble selecting faint areas that are real, but you also have high background.

With expand boundaries

Smooth coordinates: Usually a good option so that the outline does not look so blocky.

Previous image with smooth coordinates

Exclude on boundary: Ignores objects that are partially cut off by the edge of the image.

Single annotation: Possibly the most important setting. When checked, all objects detected are part of the same single object. This is good if you want all of your measurements for a slide merged. If you have two tissue slices on the same slide, you may want to have separate annotations, and thus summary measurements, for each.

Summary- I tend to start with the defaults, and once my Threshold is acceptable, I drop the Requested pixel size, increase the Minimum area dramatically, and reduce the Max fill area dramatically.
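Once you have settled on values, the whole command is scriptable; running it once through the dialog puts the full line in the Workflow tab. A trimmed sketch of what the generated call looks like (parameter names from memory, values illustrative; prefer the exact line from your own Workflow tab, and note this requires the QuPath environment):

```groovy
// Illustrative only -- copy the generated line from your own Workflow tab;
// parameters not listed here fall back to their defaults
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2',
    '{"threshold": 215, "requestedPixelSizeMicrons": 5.0, ' +
    '"minAreaMicrons": 100000.0, "maxHoleAreaMicrons": 5000.0, ' +
    '"smoothCoordinates": true, "singleAnnotation": true}')
```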

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Tissue microarrays

(to be expanded upon at a later date)

Command: TMA->TMA dearrayer

The basics of this are fairly straightforward, though getting exactly the results you want can be tricky.

TMA core diameter should be as close as possible to exactly correct. Small differences in this value can have a massive impact on the accuracy of the array. Use the line tool or the scalebar in the lower left corner to refine your estimate.

Faint tissue thresholding can be a problem, as can missing fields. If your tissue is too faint, you may find creating a predetermined array from a script to be more effective.

If your TMA detection is only slightly off, you can make use of the add row or add column functions within the TMA menu to adjust the dimensions of the array. Note that the new rows or columns default to being missing, or turned off. Missing cores are ignored when running functions like Simple tissue detection. You can select the whole array in the Hierarchy and then right click on it in the Viewer in order to set all cores to valid. Or run a quick script.
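The “quick script” in question can be as short as this sketch, assuming QuPath’s built-in TMA scripting functions (requires the QuPath environment):

```groovy
// Mark every core in the TMA grid as valid (i.e. not missing)
getTMACoreList().each { it.setMissing(false) }
fireHierarchyUpdate()
```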


Tissue detection can then be run on all cores. Note that the TMA object can be adjusted (drag individual cores around, tweak the size) ONLY PRIOR TO RUNNING SIMPLE TISSUE DETECTION. Make sure you are happy with fine tuning any ellipse positions prior to running anything that locks the TMA grid into place.

File->TMA Viewer can be helpful for looking at any TMA results. There is also an option to create a TMA data grid in software like Excel, and then import that into the measurements list for each TMA core (say patient endpoints).

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Counting cells

If counting cells, are you sure that is the best option? Cell counts can be one of the trickiest analyses to perform due to variations in cell morphology, occasional high background in staining, and the constant effect of cells that have their nuclei in a different slice of the tissue. Worse, some cells have inherently difficult morphologies, like macrophages, and in others, like CD8 positive T cells, the marker can easily obscure the nucleus. So while CD8 is technically not a nuclear marker, you end up with a population of cells that have a hematoxylin center with a CD8 marked rim, and also a high percentage of cells that are just a blob of CD8 marker with no nuclear marker. In many cases, you may be better off (and more biologically accurate) presenting your data as an area measurement.

That said, if you are sure you want to go for it… here are your options:

Cell detection (or Positive cell detection), fast cell counts, Manual spots

If none of these work, you still have the option of creating your own cell detection through ImageJ macros, though that would likely require some advanced scripting.

Cell Detection

Analyze->Cell Analysis->Cell detection

Positive cell detection and cell detection are the two main workhorses for counting and classifying cells in QuPath. They are nearly identical, with Positive cell detection having a few extra options that are specific to DAB measurements. Note: that does not mean that they can only be used with DAB, just that they only work with a second color vector called DAB. I suggest keeping your Image type as Brightfield (H-DAB) regardless of your actual stains if you want to use Positive Cell Detection.

Each method functions by finding a “nucleus” object from a modified version of the image that, hopefully, isolates nuclei. For example, only the hematoxylin stain within a brightfield image. Once that object is found, it will attempt to expand a cytoplasm around it. This cytoplasmic expansion is blind, and so the cytoplasm will always have the same shape as the nucleus, unless it runs into another cytoplasmic expansion. If there is another nucleus within cytoplasmic expansion range, the two cytoplasms will stop halfway between the two nuclei.
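Like most detection commands, this is scriptable; running it once through the dialog puts the full line in the Workflow tab. A trimmed sketch of the generated call (class name is from the 0.1.2-era plugin, values are illustrative; copy the real line from your own Workflow tab, and note this requires the QuPath environment):

```groovy
// Select the parent annotation(s) first, then detect cells inside them.
// Illustrative parameters only -- omitted ones fall back to defaults
selectAnnotations()
runPlugin('qupath.imagej.detect.nuclei.WatershedCellDetection',
    '{"requestedPixelSizeMicrons": 0.5, "backgroundRadiusMicrons": 8.0, ' +
    '"sigmaMicrons": 1.5, "minAreaMicrons": 10.0, "maxAreaMicrons": 400.0, ' +
    '"threshold": 0.1, "cellExpansionMicrons": 5.0, "includeNuclei": true, ' +
    '"smoothBoundaries": true, "makeMeasurements": true}')
```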

Options and measurements

Choose detection image: The default QuPath options are limited to Hematoxylin, or OD sum. Hematoxylin refers to whatever is in the Hematoxylin color vector (Image tab), and so it is, essentially, the “one color channel” option. You can adjust it to whatever color you want. OD sum is the total optical density of the image, and can be seen using the 5 key, or by selecting Optical density sum in the Brightness and contrast dialog. Generally, use Hematoxylin when you have an effective nuclear stain that exists exclusively in 100% of your cells, and use OD sum when you have some sort of marker that may exist in your nucleus and obscure the hematoxylin (e.g. Ki67).

Requested pixel size: Downsampling. Snaps to the closest multiple of the pixel size (I believe). Higher values will run faster, lower values should give more precise outlines. Going below your pixel width/height is not useful.

Background radius and Max background intensity: These two options are linked, and are useful for preventing masses of cells from showing up in high background areas like black marker, smudges, and tissue folds. The first thing I tend to change if I am having difficulties is to remove the background radius measurement.

Standard settings over a tissue fold.

Reducing the Max background intensity prevents many of the cells around the fold from being generated.

Median filter radius and Sigma: If your cells are being split incorrectly, increase these. If your cells are being merged together incorrectly, lower these. They are two slightly different ways of achieving roughly the same thing, but it may require some trial and error to establish the best results for your particular experiment. In general, the Median filter radius is slower, so I tend to use it less. Increasing either or both of these too much will result in an empty halo of “extra nucleus” around any detected nuclei.

Minimum area: Minimum allowed area for the nucleus. Note that this is prior to the application of Smooth boundaries, so you will occasionally see nuclear areas below your threshold if that option is selected.

Maximum area: Same as minimum, with the same caveat.

Threshold: This is the big one! It determines how “high” the pixel values have to be to be considered a potential nucleus. Anything below this is ignored. To get an idea of what the detection algorithm is looking at, use the “2” key for Hematoxylin, or the “5” key for Optical density sum, and then mouse over the values in the resulting image. The lower right corner of the viewer will show the values of the pixel the cursor is currently over (in addition to the coordinates above it). In general, you will need a lower threshold than the positive pixels you find, due to the blur mixing the positive signal with whitespace.

Cursor, sadly, not shown.

Split by shape: Default checked, I have never found any need to uncheck this, but give it a shot if you have very oddly shaped nuclei that are being split into multiple cells. Doing so generally has a very negative effect on separating any tightly clustered nuclei.

Cell Expansion: This determines how far the nucleus (base object) will expand. Pixels within this cell expansion will contribute to the “Cytoplasm” measurements in the detection object. If this is set to zero, the resulting object will be a nucleus, and not a cell object. That prevents certain other methods from being run on it, like Subcellular detection or setCellIntensityClassifications(). I would recommend always using at least 1 pixel expansion. See this thread for other concerns when using Cytoplasm based measurements.

Include cell nucleus: Unchecking removes the cell nucleus. This can reduce video lag and the number of measurements for a cell. Cytoplasmic measurements are still included despite the lack of a nucleus.

Smooth boundaries: Generates smoother cell and nuclear edges. It appears that cell measurements are based off of these lines, so this is not simply a visual change.

Make measurements: You want these, right? No check, no data. Unchecking this has only been useful in situations where I had a multiplex brightfield image, and the terrible, horrible things I had to do to get appropriate cell segmentation made the initial measurements not terribly useful.

Once you have cell objects, you might want to classify them.

Positive cell detection only

Score compartment: a selection of 6 measurements to use for your threshold(s). I would recommend not using Max.

Choosing a threshold: As noted several other places on the forums, there is no such thing as a correct, objective threshold for positivity. Ideally, there would be some ground truth, but that frequently comes from our interpretation, the amount of background generated by a given antibody, and other biological concerns (PDL1 being strongly expressed on muscle cells that are not of interest to someone studying cancer). One possible way to choose is to have a good negative control. Another would be having a pathologist or someone familiar with your marker explaining what background is expected, and what constitutes positive in regards to your study. In the end, you will have to make some decision, and there are a couple of tools in QuPath to help with this.
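Once you have settled on a cutoff, applying it in a script is a one-liner. A minimal sketch, where the measurement name and thresholds are placeholders for whichever score compartment and values you chose:

```groovy
// Single threshold: Negative/Positive
setCellIntensityClassifications("Nucleus: DAB OD mean", 0.2)
// Three thresholds instead give Negative/1+/2+/3+ scoring (and an H-score):
// setCellIntensityClassifications("Nucleus: DAB OD mean", 0.2, 0.4, 0.6)
```

Remember from above that this only works on cell objects, so it needs a nonzero cell expansion during detection.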

Measure->Measurement maps: By adjusting the lower slider on the color scale for a given measurement, you can find a good threshold where all of your positive cells show up as red. Below is an example showing the lower (Maximum) threshold has been reduced to 0.09 for Cytoplasmic Eosin staining, and the resulting cells that would be positive for such a measurement.

Breaking news! New in 0.2.0m3!
Left click on the bar to see a selection of visualization options. Rainbow is gone!

Measure->Show detection measurements->Show histograms: Sometimes, especially if you have a bimodal distribution, you can use the histogram for a particular measurement to help determine a particular cutoff. Alternatively, you can use this dialog to look at detection objects near your threshold in an already classified set of objects; this can help you decide if the “close calls” are being made correctly enough.

Note: don’t always require “nuclear” measurements to be nuclear (link to CD8 thread). As mentioned above, when using OD sum to generate your nuclei, you may also be picking up cytoplasm as part of the nucleus. That is okay, as long as you can still determine the positivity of your detection.

Sometimes you may need features that are not available by default, for example using other color deconvolution vectors, or measuring the angle of the cell. You can find additional information on adding features here.

If you still need a more complex classifier, check here, or go straight to converting this into a script you can run on the whole project.

Fast cell counts (brightfield)

A faster, less complicated option for finding cell objects. Creates a circle, not an outline of the cell.

One of the primary advantages of this method is that it has a merged Hematoxylin+DAB “channel.” That means, in cases where you have more than two stains (H-DAB+Red/purple/etc), you can do slightly better than pure Optical density sum, which would tend to pick up things like purple very strongly.

These objects are detections, not cells, so keep that in mind for any further analyses (in other words, options like subcellular detection, or setCellIntensityClassifications() will not work).

If you still need a more complex classifier, check here, or go straight to converting this into a script you can run on the whole project.

Points tool

Version note: in 0.1.3 and beyond, the number of points is included automatically in the Annotation measurements list. This might be a worthwhile reason to swap to a newer version than 0.1.2 if you are going to be creating points in a lot of images.

Manual spots can be placed for cases where counts of a small number of rare or complex objects are desired. The fact that they are annotations means that they can be dragged around after being placed. Despite any adjustments to their radius, they are actually only single points, so measurements are not particularly meaningful. Annotation spots can be classified by right clicking on one of a set, then selecting a class from the context menu. This changes the class for an entire set of points.

To create a new set, and apply a new class, click the “Add” button, which will create a new “points” object. Delete any points you don’t want by first selecting the points tool, selecting the set of points that contains that point, and then ALT+left clicking on it. Having any other tool active (like the move tool) will not work.

These spots can also be converted to detection objects (ellipses or other) using a script, and measurements can be generated within the ellipse detections (since they are no longer single point spots), or they can be reclassified.
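If you would rather roll your own version of that conversion, the general shape is below. Treat this purely as a sketch against a 0.2.0m#-era API: method names like getPointList, createEllipseROI, and ImagePlane have shifted between versions, so prefer the linked script for anything real.

```groovy
import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs

double radius = 5  // ellipse radius in pixels - pick something meaningful for your image

// Find the point annotations, make one classified ellipse detection per point
def pointAnnotations = getAnnotationObjects().findAll { it.getROI().isPoint() }
def newDetections = []
for (ann in pointAnnotations) {
    for (p in ann.getROI().getPointList()) {
        def roi = ROIs.createEllipseROI(p.getX() - radius, p.getY() - radius,
                radius * 2, radius * 2, ImagePlane.getDefaultPlane())
        newDetections << PathObjects.createDetectionObject(roi, ann.getPathClass())
    }
}
addObjects(newDetections)
removeObjects(pointAnnotations, true)
```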

Once you have annotated your slides with spots, you may still be interested in using a script to create a summary spreadsheet. Note that EACH SET of points is a separate annotation object, and thus will have its own line on any resulting spreadsheet. For this reason it might be a good idea to try the script converting the spots into classified detections, so that all sets of points will go on a single line in the resulting document.

If this wasn’t for you, maybe choose a different cell detection method, or try an area measurement?

Measuring areas

Positive pixel count: Measuring one stain vs everything else. Your basic % area tool.

Create cytokeratin annotations: Similar to positive pixel count, except with a few additional options and generates annotations rather than detections. You can run cell detections within these. Even… different cell detections for the different areas!

Tiles: When you quickly want a lot of squares. Good starter analysis. Create detections or annotations.

SLIC superpixel segmentation: My personal favorite due to flexibility, but requires more coding to really get the most out of it. Create detections only… at first.

Subcellular detections: Very workaroundy, and likely to attract some Pete based aggro, but annotation objects can be turned into cell objects with a script, and the resulting area can be analyzed using subcellular detections.

Much of this will likely be obsolete once the Pixel classifier is fully functional. If none of this sounds right for your area measurements, you may want to look into 0.2.0m# for that feature. Be aware that it is not yet able to be run across a project, and has to be trained on each individual image. Another option is using the flexibility of ImageJ or scripting to perform your own, custom analysis.

Positive pixel count

There are several ways to measure areas, the most popular of which is Analyze->Region identification->Positive pixel count (experimental).

Basically, a selected annotation will be divided up into positive, negative, and neither pixels. Summary measurements will be added to the parent annotation. The positive pixel and negative pixels will be detection objects, while pixels that meet neither threshold will be left blank. Here you can see the red (positive) pixels, blue (negative/hematoxylin pixels), and some empty space that was below the hematoxylin threshold.

Resulting measurements include:

Positive % of stained pixels: Positive/(Positive+Negative)*100

Positive % of total ROI area: Positive/(Positive+Negative+Neither)*100

The first one ignores white space/areas below the hematoxylin threshold. For example, with 800 positive, 200 negative, and 1000 “neither” pixels, you would get 80% of stained pixels but only 40% of the total ROI area.

0.1.2 warning: In 0.1.2, each area analyzed by the tool required at least one “negative” pixel, or else the percentage positive would error out. This problem could usually be compensated for by placing a negative value in the Hematoxylin threshold (‘Negative’), but if your whole project will revolve around this feature, I would recommend figuring out the setup of version 0.1.3 or trying out 0.2.0m#.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations or to increase performance due to masses of different pixel objects on screen.

Hematoxylin threshold (‘Negative’): Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold (‘Positive’): Pixels above this threshold in the second deconvolved channel will be considered Positive.
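As with cell detection, running the command once and using Workflow->Create script will capture the exact parameter string for your version. A sketch of what gets recorded (values are examples only):

```groovy
// Sketch - copy the real string from Workflow->Create script for your version.
selectAnnotations()
runPlugin('qupath.imagej.detect.tissue.PositivePixelCounterIJ', '{' +
    '"downsampleFactor": 4, ' +
    '"gaussianSigmaMicrons": 2.0, ' +
    '"thresholdStain1": 0.1, ' +      // Hematoxylin ("Negative") threshold
    '"thresholdStain2": 0.3, ' +      // Eosin/DAB ("Positive") threshold
    '"addSummaryMeasurements": true}')
```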

IMPORTANT: I have often flipped my positive and negative vectors and thresholds due to one specific interaction between positive and negative. If a pixel is above both the negative and positive threshold, it is considered “positive.” In cases where there is dark background in another stain that is causing problems (Masson’s trichrome, H-DAB+Red, background from black shadows, etc), I have swapped my color vectors so that Hematoxylin has the color vector for my marker of interest. That way if there is something that is very dark in the other channel, it will be treated as “negative.” More details on that and dealing with the area issues mentioned next in this thread and the links contained within it.

Positive pixel count frequently struggles over large areas, and I strongly recommend creating annotation tiles (see tiles section) before running it, and then summing the results either in a script or in a spreadsheet program after.
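The summing step can be a short script like the sketch below. The measurement names here are placeholders; copy the exact column names for your version from Measure->Show annotation measurements before using anything like this.

```groovy
// Sum positive/negative pixel areas across all tile annotations,
// then recompute the overall percentage. Measurement names are assumptions!
double positive = 0
double negative = 0
for (ann in getAnnotationObjects()) {
    def p = ann.getMeasurementList().getMeasurementValue('Positive pixel area µm^2')
    def n = ann.getMeasurementList().getMeasurementValue('Negative pixel area µm^2')
    if (!Double.isNaN(p)) positive += p
    if (!Double.isNaN(n)) negative += n
}
println 'Positive % of stained pixels: ' + 100 * positive / (positive + negative)
```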

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

Cytokeratin tool (not exclusively for cytokeratin)

Analyze->Region identification->Create cytokeratin annotations (experimental): A fairly specific version of the positive pixel tool, but gives you annotations (already classified as Tumor and Stroma) rather than detections, so that you can run cell detection in one or the other. Does not need to be used on cytokeratin, any kind of broad background stain will work.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations or to increase performance due to masses of different pixel objects on screen.

Tissue threshold: Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold: Pixels above this threshold in the second deconvolved channel will be considered Positive.

Separation distance: This places a small separation between the tumor and stroma regions that are generated, which can help with half of a cell being represented in each annotation.

Just like the positive pixel tool, if you run into problems with this tool you may need to downsample and blur further, or create tile annotations first to break up the area analyzed at any one time.

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?


Tiles

Analyze->Region identification->Tiles & superpixels->Create tiles:

Overview: Takes an annotation area object and breaks it up into tiles.

Basic, but does the trick for simple analyses. One option is to create detections that can have measurements (Add intensity features), and then be classified. It can also be used to create annotations for various slow detection methods to work on a large scale, like positive pixel detection, sending regions to ImageJ, or creating image tiles for output for AI training/processing.

Tile size: How large each square is on a side.

Trim to ROI: Creates non-square tiles if the tile at any given position would otherwise extend outside of the parent annotation. Prevents including too much whitespace in the measurements of tiles on the edge of tissue.

Make annotation tiles: What it says. Annotation tiles can have other things generated inside of them, while detections cannot. Whether you want this depends on your final purpose. Don’t select it if you want to simply classify a bunch of tiles as positive or negative to quickly get a rough positive area measurement (or run a more complex classifier). Do check it for most other things.

Remove parent annotation: Project specific, but generally I would check this if Make annotation tiles is also checked. If you check only Make annotation tiles, and attempt to “selectAnnotations()” and run some sort of cell detection, as soon as the parent annotation is selected, it will wipe out all Tile annotations within it. If you want tile annotations, but also want to end up with one large annotation at the end, I would check both of these, and then Objects->Merge selected annotations at some point later in the pipeline.
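That tiles-then-merge pipeline looks roughly like the sketch below in a script. The plugin name comes from Workflow->Create script; the tile size and other values are placeholders:

```groovy
// Break the selected annotation(s) into annotation tiles, removing the parent
selectAnnotations()
runPlugin('qupath.lib.algorithms.TilerPlugin', '{' +
    '"tileSizeMicrons": 500.0, ' +
    '"trimToROI": true, ' +
    '"makeAnnotations": true, ' +
    '"removeParentAnnotation": true}')

// ...run your per-tile detection or classification here...

// Then fuse everything back into one large annotation at the end
selectAnnotations()
mergeSelectedAnnotations()
```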

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

SLICs (Superpixels. Big Pixels.)

Analyze->Region identification->Tiles & superpixels->SLIC superpixel segmentation:

Now we get to the good stuff. This option pairs well with another tool in the same menu:

Analyze->Region identification->Tiles & superpixels->Tile classifications to annotations.

Overview: Create detection tile objects that follow the outline of similarly stained pixels. This works in both brightfield and multichannel images.

Another tiling method, but this time you only have the option of starting with detections. They follow the outline of similarly stained pixels, and are a very flexible area classification tool. Once the SLICs are classified, similarly classified areas can be merged into annotations (see short guide here), which can then be used to search for cells or other objects. The most obvious example would be determining a “Tumor vs Stroma” area in something like an H&E, or in complex images such as those generated with MIBI. Decision tree or other trainable classifiers can be used on the large number of measurements that can be applied to these through Analyze->Calculate features->Add intensity features. More information on adding features here.

Getting SLICs that work for your project is too project specific to get into, but generally for large tissue based features I prefer larger, more regular tiles, while for cell detection or positive area detection I make them as small and irregular as I can tolerate (smaller and less regular = more processing time). Also, if you are using Haralick features to analyze texture, you will need either larger tiles, or you will want to use information from surrounding areas to help classify the tile.

The buttons and knobs

Gaussian sigma: Blur. Increase this for smoother tiles or to increase performance.

Superpixel spacing: Depending on your target and classification method, this can vary quite a bit. I usually start with 50 and then lower it as I want to get more precise with the edges of what I am measuring.

Number of iterations: I usually don’t change this, and haven’t noticed a major effect by increasing it. You may be able to speed up large areas by decreasing it, but always test on a small region first to see the impact.

Regularization: Can have a massive impact on the shape of the superpixels. The value seems to be very correlated with the size of the superpixel.
For example regularization 10.

Regularization 0.01, shown with heatmap of Eosin staining to emphasize the difference.
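For scripting, the same run-once-and-record approach applies. A sketch of the recorded line (parameter names as captured by Workflow->Create script; values are examples, and see the notes above for picking spacing and regularization):

```groovy
// Sketch - generate SLIC superpixel detections within the selected annotation(s)
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin', '{' +
    '"sigmaMicrons": 5.0, ' +
    '"spacingMicrons": 50.0, ' +      // my usual starting point
    '"maxIterations": 10, ' +
    '"regularization": 0.25, ' +      // massive impact on superpixel shape
    '"adaptRegularization": false, ' +
    '"useDeconvolved": false}')
```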

Measurement maps + SLICs can be used to accentuate many features of the tissue, even with relatively simple measurements, as shown.

Tissue segmentation and classification with SLICs has been a fairly standard way for me to start complex projects. Classification will require the addition of new feature measurements, of which there are a variety!

Functionality will be mostly superseded by the Pixel Classifier at some point during 0.2.0m#

If you want to measure areas classified as a certain object, you can either sum the detection area measurements within classes with a script, or convert classified detections into annotations, which have an automatic area measurement.
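The summing route can be as simple as grouping detection ROI areas by class. A minimal sketch; note the areas here are in pixels, so multiply by the squared pixel size if you want µm².

```groovy
// Sum detection ROI areas (in pixels^2) per classification
def areas = [:].withDefault { 0.0d }
for (det in getDetectionObjects()) {
    def pc = det.getPathClass()
    areas[pc == null ? 'Unclassified' : pc.toString()] += det.getROI().getArea()
}
areas.each { cls, area -> println cls + ': ' + area + ' px^2' }
```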

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?


Subcellular detections

Analyze->Cell Analysis->Subcellular detection: Create an annotation, turn it into a cell object using a script, perform your subcellular detection. Can work better for small, oddly shaped areas. This option is primarily used for multichannel images, as a stand-in for Positive pixel detection, which is brightfield only. It requires pixel size information (Image tab). This is definitely not the intended use for Subcellular detections, as it is more of a spot counter.
For the LuCa 7 channel sample (a field of view image), I would start by creating a full image annotation, then run the script linked to turn it into a cell object, followed by creating another whole image annotation (making data wrangling easier later on). So:

//Paste script in here

Ok, I was wrong, I do need to make one minor adjustment to the pasted script, as indicated in the script comments.
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()}
needs to be changed to
def targets = getObjects{return it.isAnnotation()}

Now my hierarchy looks like this.
After playing with the settings a little, I ended up with this.

Note that this isn’t generally a good idea, and as you can see here, really didn’t do a great job of splitting some of the tumor cells. YMMV, but it is another option to try if other options aren’t working. It will take additional scripting to convert this into useful data, so expect to do a little coding if you go this route.

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?