QuPath Intro: Choose your own analysis (adventure)

Counting cells

If counting cells, are you sure that is the best option? Cell counts can be one of the trickiest analyses to perform due to variations in cell morphology, occasionally high background staining, and the ever-present effect of cells whose nuclei lie in a different slice of the tissue. Worse, some cells, like macrophages, have inherently difficult morphologies, while in others, like CD8-positive T cells, the stain can easily obscure the nucleus. So while CD8 is technically not a nuclear marker, you end up with a population of cells that have a hematoxylin center with a CD8-marked rim, plus a high percentage of cells that are just a blob of CD8 marker with no nuclear stain at all. In many cases, you may be better off (and more biologically accurate) presenting your data as an area measurement.

That said, if you are sure you want to go for it… here are your options:

Cell detection (or Positive cell detection), Fast cell counts, Manual spots (Points tool)

If none of these work, you still have the option of creating your own cell detection through ImageJ macros, though that would likely require some advanced scripting.

Cell Detection

Overview

Analyze->Cell Analysis->Cell detection

Positive cell detection and cell detection are the two main workhorses for counting and classifying cells in QuPath. They are nearly identical, with Positive cell detection having a few extra options that are specific to DAB measurements. Note: that does not mean that they can only be used with DAB, just that they only work with a second color vector called DAB. I suggest keeping your Image type as Brightfield (H-DAB) regardless of your actual stains if you want to use Positive Cell Detection.

Each method functions by finding a “nucleus” object from a modified version of the image that, hopefully, isolates nuclei. For example, only the hematoxylin stain within a brightfield image. Once that object is found, it will attempt to expand a cytoplasm around it. This cytoplasmic expansion is blind, and so the cytoplasm will always have the same shape as the nucleus, unless it runs into another cytoplasmic expansion. If there is another nucleus within cytoplasmic expansion range, the two cytoplasms will stop halfway between the two nuclei.
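The dialog writes its parameters to the Workflow tab, so the same detection can be reproduced in a script. A minimal sketch; every parameter value below is a placeholder, and the plugin class path shown is the 0.2.0 one, so copy the exact string from your own Workflow tab after a GUI run:

```groovy
// Select the annotation(s) to process, then run cell detection.
// All parameter values are example placeholders - replace them with
// the real ones recorded in the Workflow tab after running the dialog.
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{' +
    '"detectionImageBrightfield": "Hematoxylin OD", ' +
    '"requestedPixelSizeMicrons": 0.5, ' +
    '"backgroundRadiusMicrons": 8.0, ' +
    '"medianRadiusMicrons": 0.0, ' +
    '"sigmaMicrons": 1.5, ' +
    '"minAreaMicrons": 10.0, ' +
    '"maxAreaMicrons": 400.0, ' +
    '"threshold": 0.1, ' +
    '"maxBackground": 2.0, ' +
    '"cellExpansionMicrons": 5.0, ' +
    '"includeNuclei": true, ' +
    '"smoothBoundaries": true, ' +
    '"makeMeasurements": true}')
```

Scripting this is also the natural route to running the same settings across a whole project.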

Options and measurements

The command interface

Choose detection image: The default QuPath options are limited to Hematoxylin, or OD sum. Hematoxylin refers to whatever is in the Hematoxylin color vector (Image tab), and so it is, essentially, the “one color channel” option. You can adjust it to whatever color you want. OD sum is the total optical density of the image, and can be seen using the 5 key, or by selecting Optical density sum in the Brightness & contrast dialog. Generally, use Hematoxylin when you have an effective nuclear stain present in 100% of your cells, and use OD sum when you have some sort of marker that may exist in your nucleus and obscure the hematoxylin (e.g. Ki67).

Requested pixel size: Downsampling. Snaps to the closest multiple of the pixel size (I believe). Higher values will run faster, lower values should give more precise outlines. Going below your pixel width/height is not useful.

Background radius and Max background intensity: These two options are linked, and are useful for preventing masses of cells from showing up in high background areas like black marker, smudges, and tissue folds. The first thing I tend to change if I am having difficulties is to remove the background radius measurement.

Standard settings over a tissue fold.

Reducing the Max background intensity prevents many of the cells around the fold from being generated.

Median filter radius and Sigma: If your cells are being split too often, increase these. If your cells are being merged together incorrectly, lower these. They are two slightly different ways of achieving roughly the same thing, but it may require some trial and error to establish the best results for your particular experiment. In general, Median filter radius is slower, so I tend to use it less. Increasing either or both of these too much will result in an empty halo of “extra nucleus” around any detected nuclei.

Minimum area: Minimum allowed area for the nucleus. Note that this is prior to the application of Smooth boundaries, so you will occasionally see nuclear areas below your threshold if that option is selected.

Maximum area: Same as minimum, with the same caveat.

Threshold: This is the big one! It determines how “high” the pixel values have to be to be considered a potential nucleus. Anything below this is ignored. To get an idea of what the detection algorithm is looking at, use the “2” key for Hematoxylin, or the “5” key for Optical density sum, and then mouse over the values in the resulting image. The lower right corner of the viewer will show the values of the pixel the cursor is currently over (in addition to the coordinates above it). In general, you will need a lower threshold than the positive pixels you find, due to the blur mixing the positive signal with whitespace.

Cursor, sadly, not shown.

Split by shape: Default checked, I have never found any need to uncheck this, but give it a shot if you have very oddly shaped nuclei that are being split into multiple cells. Doing so generally has a very negative effect on separating any tightly clustered nuclei.

Cell Expansion: This determines how far the nucleus (base object) will expand. Pixels within this cell expansion will contribute to the “Cytoplasm” measurements in the detection object. If this is set to zero, the resulting object will be a nucleus, and not a cell object. That prevents certain other methods from being run on it, like Subcellular detection or setCellIntensityClassifications(). I would recommend always using at least 1 pixel expansion. See this thread for other concerns when using Cytoplasm based measurements.

Include cell nucleus: Unchecking removes the cell nucleus. This can reduce video lag and the number of measurements for a cell. Cytoplasmic measurements are still included despite the lack of a nucleus.

Smooth boundaries: Generates smoother cell and nuclear edges. It appears that cell measurements are based off of these lines, so this is not simply a visual change.

Make measurements: You want these, right? No check, no data. Unchecking this has only been useful in situations where I had a multiplex brightfield image, and the terrible, horrible things I had to do to get appropriate cell segmentation made the initial measurements not terribly useful.

Once you have cell objects, you might want to classify them.

Positive cell detection only

Information specific to the positive version of the command

Score compartment: a selection of 6 measurements to use for your threshold(s). I would recommend not using Max. Ever. Positive cell detection now works for fluorescent images as well, and behaves as a built-in Classify->Object classification->Create single measurement classifier, though it allows for three different thresholds.

Choosing a threshold: As noted in several other places on the forums, there is no such thing as a correct, objective threshold for positivity. Ideally, there would be some ground truth, but that frequently comes down to our interpretation, the amount of background generated by a given antibody, and other biological concerns (PD-L1 being strongly expressed on muscle cells that are not of interest to someone studying cancer). One possible way to choose is to have a good negative control. Another is having a pathologist or someone familiar with your marker explain what background is expected, and what constitutes positive in regards to your study. In the end, you will have to make some decision, and there are a couple of tools in QuPath to help with this.
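Once cells exist, a cutoff can also be applied (or re-applied) from a script with setCellIntensityClassifications(), which makes it quick to compare several candidate thresholds. A sketch; the measurement name and the 0.2 cutoff are assumptions you would replace with your own:

```groovy
// One threshold -> Positive/Negative; passing three thresholds instead
// gives 1+/2+/3+ classes (and an H-score in the parent annotation).
// "Nucleus: DAB OD mean" and 0.2 are placeholders for your own
// measurement name and cutoff.
setCellIntensityClassifications("Nucleus: DAB OD mean", 0.2)

// Quick tally of the resulting classes to judge the cutoff.
println(getCellObjects().countBy { it.getPathClass() })
```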

Choosing thresholds for positivity

Measure->Measurement maps: By adjusting the lower slider on the color scale for a given measurement, you can find a good threshold where all of your positive cells show up as red. Below is an example showing the lower (Maximum) threshold reduced to 0.09 for cytoplasmic eosin staining, and the resulting cells that would be positive for such a measurement.


As of 0.2.0 there are new color map options that are better suited to visualizing data sets fairly, and making them useful for people with colorblindness.
Combining these options with the filter on the measurement list should allow you to quickly visualize your measurements of interest.

Measure->Show detection measurements->Show histograms: Sometimes, especially if you have a bimodal distribution, you can use the histogram for a particular measurement to help determine a particular cutoff. Alternatively, you can use this dialog to look at detection objects near your threshold in an already classified set of objects; this can help you decide if the “close calls” are being made correctly enough.

Note: “nuclear” measurements don’t always need to be truly nuclear (link to CD8 thread). As mentioned above, when using OD sum to generate your nuclei, you may also pick up cytoplasm as part of the nucleus. That is okay, as long as you can still determine the positivity of your detection.

Sometimes you may need features that are not available by default, for example using other color deconvolution vectors, or measuring the angle of the cell. You can find additional information on adding features here.

If you still need a more complex classifier, check here, or go straight to converting this into a script you can run on the whole project.


Fast cell counts (brightfield)

A faster, less complicated option for finding cell objects. Creates a circle, not an outline of the cell.

One of the primary advantages of this method is that it has a merged Hematoxylin+DAB “channel.” That means, in cases where you have more than two stains (H-DAB+Red/purple/etc), you can do slightly better than pure Optical density sum, which would tend to pick up things like purple very strongly.


These objects are detections, not cells, so keep that in mind for any further analyses (in other words, options like subcellular detection, or setCellIntensityClassifications() will not work).

If you still need a more complex classifier, check here, or go straight to converting this into a script you can run on the whole project.

Points tool

Pointers on points

Manual spots can be placed for cases where counts of a small number of rare or complex objects are desired. The fact that they are annotations means that they can be dragged around after being placed. Despite any adjustments to their radius, they are actually only single points, so measurements are not particularly meaningful. Annotation spots can be classified by right clicking on one of a set, then selecting a class from the context menu. This changes the class for an entire set of points.

To create a new set, and apply a new class, click the “Add” button, which will create a new “points” object. Delete any points you don’t want by first selecting the points tool, selecting the set of points that contains that point, and then ALT+left clicking on it. Having any other tool active (like the move tool) will not work.

These spots can also be converted to detection objects (ellipses or other) using a script, and measurements can be generated within the ellipse detections (since they are no longer single point spots), or they can be reclassified.

Once you have annotated your slides with spots, you may still be interested in using a script to create a summary spreadsheet. Note that EACH SET of points is a separate annotation object, and thus will have its own line on any resulting spreadsheet. For this reason it might be a good idea to try the script converting the spots into classified detections, so that all sets of points will go on a single line in the resulting document.

If this wasn’t for you, maybe choose a different cell detection method, or try an area measurement?

Measuring areas

Positive pixel count: Measuring one stain vs everything else. Your basic % area tool.

Removed in 0.2.0

Create cytokeratin annotations: Similar to positive pixel count, except with a few additional options and generates annotations rather than detections. You can run cell detections within these. Even… different cell detections for the different areas!

Tiles: When you quickly want a lot of squares. Good starter analysis. Create detections or annotations.

SLIC superpixel segmentation: My personal favorite due to flexibility, but requires more coding to really get the most out of it. Create detections only… at first.

Subcellular detections: Very workaroundy, and likely to attract some Pete based aggro, but annotation objects can be turned into cell objects with a script, and the resulting area can be analyzed using subcellular detections.

Much of this will likely be obsolete once the Pixel classifier is fully functional. If none of this sounds right for your area measurements, you may want to look into 0.2.0m# for that feature. Be aware that it is not yet able to be run across a project, and has to be trained on each individual image. Another option is using the flexibility of ImageJ or scripting to perform your own, custom analysis.

Positive pixel count, deprecated as of 0.2.0

There are several ways to measure areas, the most popular of which is Analyze->Region identification->Positive pixel count (experimental).

Basically, a selected annotation will be divided up into positive, negative, and neither pixels. Summary measurements will be added to the parent annotation. The positive pixel and negative pixels will be detection objects, while pixels that meet neither threshold will be left blank. Here you can see the red (positive) pixels, blue (negative/hematoxylin pixels), and some empty space that was below the hematoxylin threshold.

Resulting measurements include:

Positive % of stained pixels: Positive/(Positive+Negative)*100

Positive % of total ROI area: Positive/(Positive+Negative+Neither)*100

The first one ignores white space/areas below the hematoxylin threshold.
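As a quick numeric sketch of the difference between the two formulas (the pixel counts here are made up):

```groovy
// Hypothetical pixel counts for one annotation
double positive = 120000
double negative = 300000
double neither  = 80000   // below both thresholds (e.g. whitespace)

// Positive % of stained pixels ignores the "neither" pixels
double pctOfStained = positive / (positive + negative) * 100            // ~28.6
// Positive % of total ROI area includes them
double pctOfROI     = positive / (positive + negative + neither) * 100  // 24.0
```

The more whitespace your annotation contains, the further these two numbers drift apart, so pick the denominator that matches your biological question.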

0.1.2 warning: In 0.1.2, each area analyzed by the tool required at least one “negative” pixel, or else the percentage positive would error out. This problem could usually be compensated for by placing a negative value in the Hematoxylin threshold (‘Negative’), but if your whole project will revolve around this feature, I would recommend figuring out the setup of version 0.1.3 or trying out 0.2.0m#.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations or to increase performance due to masses of different pixel objects on screen.

Hematoxylin threshold (‘Negative’): Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold (‘Positive’): Pixels above this threshold in the second deconvolved channel will be considered Positive.

IMPORTANT: I have often flipped my positive and negative vectors and thresholds due to one specific interaction between positive and negative. If a pixel is above both the negative and positive threshold, it is considered “positive.” In cases where there is dark background in another stain that is causing problems (Masson’s trichrome, H-DAB+Red, background from black shadows, etc), I have swapped my color vectors so that Hematoxylin has the color vector for my marker of interest. That way if there is something that is very dark in the other channel, it will be treated as “negative.” More details on that and dealing with the area issues mentioned next in this thread and the links contained within it.

Positive pixel count frequently struggles over large areas, and I strongly recommend creating annotation tiles (see tiles section) before running it, and then summing the results either in a script or in a spreadsheet program after.

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

Cytokeratin tool: no longer exists in 0.2.0, replaced by sequential uses of the simple thresholder.

Cytokeratin tool (not exclusively for cytokeratin)

Analyze->Region identification->Create cytokeratin annotations (experimental): A fairly specific version of the positive pixel tool, but gives you annotations (already classified as Tumor and Stroma) rather than detections, so that you can run cell detection in one or the other. Does not need to be used on cytokeratin, any kind of broad background stain will work.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations or to increase performance due to masses of different pixel objects on screen.

Tissue threshold: Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold: Pixels above this threshold in the second deconvolved channel will be considered Positive.

Separation distance: This places a small separation between the tumor and stroma regions that are generated, which can help with half of a cell being represented in each annotation.

Just like the positive pixel tool, if you run into problems with this tool you may need to downsample and blur further, or create tile annotations first to break up the area analyzed at any one time.

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

Tiles

Analyze->Tiles & superpixels->Create tiles:

Overview: Takes an annotation area object and breaks it up into tiles.

Basic, but does the trick for simple analyses. One option is to create detections that can have measurements (Add intensity features), and then be classified. It can also be used to create annotations for various slow detection methods to work on a large scale, like positive pixel detection, sending regions to ImageJ, or creating image tiles for output for AI training/processing.

Tile size: How large each square is on a side.

Trim to ROI: Creates non-square tiles if the tile at any given position would otherwise extend outside of the parent annotation. Prevents including too much whitespace in the measurements of tiles on the edge of tissue.

Make annotation tiles: What it says. Annotation tiles can have other things generated inside of them, while detections cannot. Whether you want this depends on your final purpose. Don’t select it if you want to simply classify a bunch of tiles as positive or negative to quickly get a rough positive area measurement (or run a more complex classifier). Do check it for most other things.

Remove parent annotation: Project specific, but generally I would check this if Make annotation tiles is also checked. If you check only Make annotation tiles, and attempt to “selectAnnotations()” and run some sort of cell detection, as soon as the parent annotation is selected, it will wipe out all tile annotations within it. If you want tile annotations, but also want to end up with one large annotation at the end, I would check both of these, and then Objects->Merge selected annotations at some point later in the pipeline.
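The tile-then-merge pattern can be scripted end to end. A sketch; the 100 µm tile size is a placeholder, and the plugin string should be copied from your own Workflow tab:

```groovy
// Break each selected annotation into annotation tiles, removing the
// parent so that selectAnnotations() later picks up only the tiles.
selectAnnotations()
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 100.0, ' +
    '"trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": true}')

// ...run cell detection, classification, etc. on the tiles here...

// Merge everything back into one large annotation at the end.
selectAnnotations()
mergeSelectedAnnotations()
```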

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

SLICs (Superpixels. Big Pixels.)

The best things in life are pixels

Analyze->Tiles & superpixels->SLIC superpixel segmentation:

Now we get to the good stuff. This option pairs well with another tool in the same menu:

Analyze->Tiles & superpixels->Tile classifications to annotations.

Overview: Create detection tile objects that follow the outline of similarly stained pixels. This works in both brightfield and multichannel images.

Another tiling method, but this time you only have the option of starting with detections. They follow the outline of similarly stained pixels, and are a very flexible area classification tool. Once the SLICs are classified, similarly classified areas can be merged into annotations (see short guide here), which can then be used to search for cells or other objects. The most obvious example would be determining a “Tumor vs Stroma” area in something like an H&E, or in complex images such as those generated with MIBI. Decision tree or other trainable classifiers can be used on the large number of measurements that can be applied to these through Analyze->Calculate features->Add intensity features. More information on adding features here.

Getting SLICs that work for your project is too project specific to get into, but generally for large tissue based features I prefer larger, more regular tiles, while for cell detection or positive area detection I make them as small and irregular as I can tolerate (smaller and less regular = more processing time). Also, if you are using Haralick features to analyze texture, you will need either larger tiles, or you will want to use information from surrounding areas to help classify the tile.
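As with the other commands, SLIC settings can be frozen into a script via the Workflow tab once you have dialed them in. A heavily hedged sketch; all values are placeholders, and the exact plugin path and parameter names should be taken from your own Workflow tab:

```groovy
// Small, irregular superpixels: small spacing, low regularization.
// Placeholder values - copy the real JSON from the Workflow tab.
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin', '{"sigmaMicrons": 5.0, ' +
    '"spacingMicrons": 50.0, "maxIterations": 10, "regularization": 0.25}')
```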

The buttons and knobs

Gaussian sigma: Blur. Increase this for smoother tiles or to increase performance.

Superpixel spacing: Depending on your target and classification method, this can vary quite a bit. I usually start with 50 and then lower it as I want to get more precise with the edges of what I am measuring.

Number of iterations: I usually don’t change this, and haven’t noticed a major effect by increasing it. You may be able to speed up large areas by decreasing it, but always test on a small region first to see the impact.

Regularization: Can have a massive impact on the shape of the superpixels. The value seems to be very correlated with the size of the superpixel.
For example regularization 10.


Regularization 0.01, shown with heatmap of Eosin staining to emphasize the difference.

Measurement maps + SLICs can be used to accentuate many features of the tissue, even with relatively simple measurements, as shown.

Tissue segmentation and classification with SLICs has been a fairly standard way for me to start complex projects. Classification will require the addition of new feature measurements, of which there are a variety!

Functionality has in large part been superseded by the pixel classifier, but there are still some things you can do with SLICs that simply can’t be done with the current pixel classifier. Chief among these is the combination of large scale features and small scale ones, and the use of all of the Haralick features to classify objects.

If you want to measure areas classified as a certain class, you can either sum the detection area measurements within each class with a script, or convert classified detections into annotations, which have an automatic area measurement.
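A sketch of the first option, summing detection ROI areas per class. Areas come back in pixels squared; convert using the pixel size from the Image tab if you want µm²:

```groovy
// Sum detection ROI areas (in pixels^2) grouped by classification.
def areasByClass = [:].withDefault { 0.0 }
for (det in getDetectionObjects()) {
    def name = det.getPathClass()?.toString() ?: 'Unclassified'
    areasByClass[name] += det.getROI().getArea()
}
areasByClass.each { cls, area -> println(cls + ': ' + area + ' px^2') }
```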

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?


Subcellular detections

Replaced by simple thresholder

Analyze->Cell Analysis->Subcellular detection: Create an annotation, turn it into a cell object using a script, perform your subcellular detection. Can work better for small, oddly shaped areas. This option is primarily used for multichannel images, as a stand-in for Positive pixel detection, which is brightfield only. It requires pixel size information (Image tab). This is definitely not the intended use for Subcellular detections, as it is more of a spot counter.
For the LuCa 7 channel sample (a field of view image), I would start by creating a full image annotation, then run the script linked to turn it into a cell object, followed by creating another whole image annotation (making data wrangling easier later on). So:

createSelectAllObject(true);
//Paste script in here
createSelectAllObject(true);

Ok, I was wrong, I do need to make one minor adjustment to the pasted script, as indicated in the script comments.
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()}
needs to be changed to
def targets = getObjects{return it.isAnnotation()}

Now my hierarchy looks like this.
After playing with the settings a little, I ended up with this.


Note that this isn’t generally a good idea, and as you can see here, really didn’t do a great job of splitting some of the tumor cells. YMMV, but it is another option to try if other options aren’t working. It will take additional scripting to convert this into useful data, so expect to do a little coding if you go this route.

Were you looking for something else for your area measurement? Or would you like to review how to generate a simple script to summarize your measurements?

Multichannel

If you are here, your images should be something like a fluorescence, MIBI, CyTOF, etc. image that is composed of multiple grayscale, largely independent, channels. While QuPath doesn’t perform linear unmixing, exactly, some of the same functionality can be performed in a script to alter detection measurements that are generated. Some of the R^2 or colocalization scripts can be used to check for bleed-through.

If you have this sort of project, do you have a single field of view? Tissue of some sort?


Single field of view

For single field of view images, you usually will want to analyze the entire image. In the Objects menu, there is an option to Create full image annotation, or in a script you can use

createSelectAllObject(true)

The true selects the object, so that whatever you run on the next line will have that object already selected. If you are running scripts multiple times on the same set of images, note that this does not clear all objects in the image in the same way that Simple Tissue Detection does. You may want to include a

clearAllObjects()

line in order to prevent many overlapping sets of objects.
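Putting the two together, a typical re-runnable script header looks something like:

```groovy
// Start clean so repeated runs don't stack overlapping objects,
// then create and select a full-image annotation for the next step.
clearAllObjects()
createSelectAllObject(true)
// ...cell detection or other commands go here...
```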

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Tissue detection in multichannel images

Try the simple thresholder here (except you will use “above” instead of below threshold for your classifier) if you have one dominant channel (DAPI with some Gaussian blur?), or use the pixel classifier for more complex projects and images.

Now obsolete as of the simple thresholder and pixel classifier

Much of this will likely be obsolete once the Pixel classifier is fully functional.

Simple tissue detection: Seems to only take into account the first channel of the image. If you can adjust your input such that this channel is DAPI, or some highly autofluorescent channel, you might be able to use Simple Tissue Detection in the same way as the brightfield instructions, only with Dark background checked.

Very likely this won’t be the case, so I recommend checking out this thread which contains both a script for, and a rough description of how to perform, tissue detection in a multichannel image.

The script is a great example of how to use ImageJ to handle some image analysis, and could be a good starting point to build off of for more complex ImageJ based analyses. I can go into more detail if there is enough interest.

Alternatively, you can fairly quickly draw areas using the brush tool or wand tool. The wand tool, in particular, can be very useful here as it takes into account what is visible on screen. That means you can turn off interfering channels, and/or enhance the brightness of useful channels, in order to make the tool more convenient to use.

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Counting cells

Counting cells in multichannel images is far, far easier than in brightfield counterparts, as there is usually a well defined, unobstructed, nuclear channel that can be used as the core of cell objects. Still, you have several options.

Cell detection: The normal command
Manual spots: Annotation spot tool
Subcellular detection tool: Abuse the system. Also can be used as an area measurement to detect cell-like objects. TUNEL?
Manual creation: Build cells for confocal or other high magnification images.

Cell detection in IF

Reserved for expansion, but for now most of the measurements and their uses are the same as the brightfield cell detection description.

No longer true in 0.2.0

The only major changes are that Positive cell detection no longer works, and that your Choose detection image options consist of your various channels. If you have a fluorescent image in an RGB space, your channels 1, 2, and 3 are R, G, and B. So channel 3 will most likely be your detection channel for nuclear markers such as DAPI/Hoechst.

I would definitely recommend looking at some of the concerns about cytoplasmic measurements described here if any of your channels are for cytoplasmic proteins. Membrane proteins that localize similar to HER2 are particularly problematic.

If you have multiple nuclear channels, and want to combine multiple cell detections, I recommend checking out this thread.

Definitely check out some of the classification options available!

Points tool

This currently works exactly the same in multichannel images as in brightfield.

Manual creation of cells: I have a short guide for using a detection channel to create nuclei, and then using the wand tool to paint the cytoplasms for very irregular cells at high zoom. This isn’t a great option if you have a lot of cells, but can work very well for a few cells per field at 63x.

https://groups.google.com/d/msg/qupath-users/ehxID096NV8/QUMpoXc_BwAJ

If you are choosing this method, you may still want to classify your cells, generate some measurements for your cells, or you might want to skip to generating a measurement summary at the end of the scripting post.

Tiles

Chopping up an image into a checker board

Analyze->Tiles & superpixels->Create tiles

Overview: Takes an annotation area object and breaks it up into square tiles.

Basic, but does the trick for simple analyses. One option is to create detections that can have measurements (Add intensity features), and then be classified. It can also be used to create annotations for various slow detection methods to work on a large scale, like positive pixel detection, sending regions to ImageJ, or creating image tiles for output for AI training/processing.

Tile size: How large each square is on a side.

Trim to ROI: Creates non-square tiles if the tile at any given position would otherwise extend outside of the parent annotation. Prevents including too much whitespace in the measurements of tiles on the edge of tissue.

Make annotation tiles: What it says. Annotation tiles can have other things generated inside of them, while detections cannot. Whether you want this depends on your final purpose. Don’t select it if you want to simply classify a bunch of tiles as positive or negative to quickly get a rough positive area measurement (or run a more complex classifier). Do check it for most other things.

Remove parent annotation: Project specific, but generally I would check this if Make annotation tiles is also checked. If you check only Make annotation tiles, and attempt to “selectAnnotations()” and run some sort of cell detection, as soon as the parent annotation is selected, it will wipe out all tile annotations within it. If you want tile annotations, but also want to end up with one large annotation at the end, I would check both of these, and then Objects->Annotations->Merge selected annotations at some point later in the pipeline.

Using tiles to generate a heatmap overlay:

You may want to add some features to your tiles, which would then let you perform a classification of some sort. Alternatively, if you have merged your tiles into new annotations by class, you may want to head back to look at cell counting options!

Subcellular detections

This method can mostly be replaced by the pixel thresholding tools in 0.2.0

Another place where things get fun! Turning a single field of view whole image annotation into a cell allows the use of the subcellular detection command to outline areas of cytoplasmic or other stains independent of cell expansion. You can use this, essentially, like the positive pixel tool, except that it will require a bit of coding.

  1. Create your whole image annotation.

  2. Convert it into a cell with a script (Be sure to read how to modify the script for this purpose).

  3. Run the subcellular detection on as many channels as you want, and adjust your thresholds until you are happy with the results. You can change the min and max spot size values to whatever you want, but make sure to keep the estimated spot size at 1. This makes the estimated spot count for a given channel equal to the area detected by the subcellular detection command.

  4. The “cell” object will now have an estimated spot count. If you have several fields of view that are all the same size, you are essentially done, as you can compare the areas that are positive in each channel between images.

  5. To get more complicated, you could then re-create the whole image annotation, add the estimated spot counts from the cell object to the annotation object (more scripting), and then run a cell detection afterwards. This will wipe out your whole image cell, but create a number of cells that could be used to normalize the area measurement. The final annotation will now have a measurement of the area covered by each channel of interest, and a count of the total number of cells.

  6. Summarize the results across the project.
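Steps 1 and 3 above can be scripted; the sketch below assumes QuPath 0.2.0-style calls. The channel name, threshold, and parameter names are placeholders, so run the command once through the GUI and copy the exact call from your Workflow tab rather than trusting these values:

```groovy
// Step 1: create the whole-image annotation
createSelectAllObject(true)

// Step 2 (converting the annotation into a cell) needs the separate
// script referenced above and is not reproduced here.

// Step 3: run subcellular detection, keeping expected spot size at 1
// so the estimated spot count equals the detected area
runPlugin('qupath.imagej.detect.cells.SubcellularDetection',
        '{"detection[Channel 2]": 0.5, "doSmoothing": false, ' +
        '"splitByIntensity": false, "splitByShape": false, ' +
        '"spotSizeMicrons": 1.0, "minSpotSizeMicrons": 0.0, ' +
        '"maxSpotSizeMicrons": 1000.0, "includeClusters": true}')
```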

Classification

0.2.0 classification

Classification has undergone some major changes in 0.2.0, most of which are covered here for normal single marker analyses:
https://qupath.readthedocs.io/en/latest/docs/tutorials/cell_classification.html
and here for more complex multiplex analyses:
https://qupath.readthedocs.io/en/latest/docs/tutorials/multiplex_analysis.html#create-a-classifier-for-each-marker

The primary thing I want to point out is that if you are doing a brightfield analysis, the stain vectors and background values you have selected when you OPEN the dialog box are what will be used for thresholding. If you change the background values or tweak your stain vectors, you absolutely must close the dialog and open a new one.

0.1.2 entry, for reference

See the multiplex classifier thread for a lot of information on classification options that include:

Positive cell detection: Classifier and cell detection all rolled into one.

Classify->Classify by specific feature: Build your own classifier through a GUI. Very tedious for anything more than a simple classifier.

setCellIntensityClassifications(): a one-line script to handle simple classifications based on a single measurement. Terrible example that I have occasionally actually used in order to turn all cells “Positive” very quickly (practically every nucleus has an area greater than 1, so everything ends up Positive):
setCellIntensityClassifications("Nucleus: Area", 1)

Classify->Create detection classifier: Create a machine learning-based trained classifier (subsection Train a cell classifier based on annotations) by defining your own training sets and measurements. Possibly create new features for this purpose.

If/then based scripts: Create your own decision tree.
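As a minimal sketch of the if/then approach, the Groovy snippet below assigns a class per cell based on one measurement. The measurement name and threshold are placeholders; substitute values that make sense for your stains, and nest further if/else branches to build a fuller decision tree:

```groovy
// Classify each cell by a single measurement threshold
// ('Nucleus: DAB OD mean' and 0.2 are placeholder values)
def positive = getPathClass('Positive')
def negative = getPathClass('Negative')
for (cell in getCellObjects()) {
    double value = measurement(cell, 'Nucleus: DAB OD mean')
    cell.setPathClass(value > 0.2 ? positive : negative)
}
fireHierarchyUpdate()
```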

Multiplex classifier: A GUI based script that allows a set of measurements to be used to establish base classes, and then classifies all objects according to which sets of parameters they meet the thresholds for. There are a lot of resources available through that thread, but if there is any specific information you think should be included here, please message me to let me know.

UPDATE: Pete has a new multiplex classifier with its own workflow described here. With a trained object classifier, you will have additional flexibility for classifying cells as positive or negative for a given marker when compared with only using a single measurement.

Once you have a set of objects that are classified, if your process is automated you will probably want to run it across a selection of images and get some data at a project level. If you are having trouble with classifier accuracy, you might want to look into generating additional features that are more pertinent to your project.