QuPath Intro: Choose your own analysis (adventure)

Positive pixel count, deprecated as of 0.2.0

There are several ways to measure areas, the most popular of which is Analyze->Region identification->Positive pixel count (experimental).

Basically, a selected annotation will be divided up into positive, negative, and neither pixels. Summary measurements will be added to the parent annotation. The positive and negative pixels will become detection objects, while pixels that meet neither threshold will be left blank. Here you can see the red (positive) pixels, the blue (negative/hematoxylin) pixels, and some empty space that was below the hematoxylin threshold.

Resulting measurements include:

Positive % of stained pixels: Positive/(Positive+Negative)*100

Positive % of total ROI area: Positive/(Positive+Negative+Neither)*100

The first one ignores white space/areas below the hematoxylin threshold.

0.1.2 warning: In 0.1.2, each area analyzed by the tool required at least one “negative” pixel, or else the percentage positive calculation would error out. This problem could usually be compensated for by placing a negative value in the Hematoxylin threshold (‘Negative’), but if your whole project will revolve around this feature, I would recommend figuring out the setup of version 0.1.3 or trying out 0.2.0m#.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations, or to improve performance when masses of separate pixel objects would otherwise be generated.

Hematoxylin threshold (‘Negative’): Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold (‘Positive’): Pixels above this threshold in the second deconvolved channel will be considered Positive.

IMPORTANT: I have often flipped my positive and negative vectors and thresholds due to one specific interaction between positive and negative. If a pixel is above both the negative and positive threshold, it is considered “positive.” In cases where a dark background in another stain is causing problems (Masson’s trichrome, H-DAB+Red, background from black shadows, etc.), I have swapped my color vectors so that Hematoxylin has the color vector for my marker of interest. That way, if there is something that is very dark in the other channel, it will be treated as “negative.” More details on that, and on dealing with the area issues mentioned next, can be found in this thread and the links contained within it.

Positive pixel count frequently struggles over large areas, and I strongly recommend creating annotation tiles (see tiles section) before running it, and then summing the results afterward either in a script (see the sketch below) or in a spreadsheet program.
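
If you do tile the region first, a short Groovy snippet can total the per-tile results afterward. This is only a rough sketch with assumed measurement names; copy the exact names from your own Measurements table before using it:

double positive = 0
double total = 0
getAnnotationObjects().each { tile ->
    double p = measurement(tile, "Positive pixel area µm^2")   // assumed name, check your table
    double t = measurement(tile, "Total ROI area µm^2")        // assumed name, check your table
    if (!Double.isNaN(p)) positive += p
    if (!Double.isNaN(t)) total += t
}
println("Positive % of total ROI area: " + 100 * positive / total)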

Were you looking for something else for your area measurement? Or would you like to review a how to generate a simple script to summarize your measurements?

Cytokeratin tool: no longer exists in 0.2.0, replaced by sequential uses of the simple thresholder.

Cytokeratin tool (not exclusively for cytokeratin)

Analyze->Region identification->Create cytokeratin annotations (experimental): A fairly specific version of the positive pixel tool, but it gives you annotations (already classified as Tumor and Stroma) rather than detections, so that you can run cell detection in one or the other. It does not need to be used on cytokeratin; any kind of broad background stain will work.

Downsample factor: As normal, a pure increase in pixel size. Larger values will run faster, but be less accurate.

Gaussian sigma: Blur. Increase this for smoother annotations, or to improve performance when masses of separate pixel objects would otherwise be generated.

Tissue threshold: Essentially the threshold for tissue versus whitespace.

Eosin/DAB threshold: Pixels above this threshold in the second deconvolved channel will be considered Positive.

Separation distance: This places a small separation between the tumor and stroma regions that are generated, which helps avoid having half of a cell represented in each annotation.

Just like the positive pixel tool, if you run into problems with this tool you may need to downsample and blur further, or create tile annotations first to break up the area analyzed at any one time.

Were you looking for something else for your area measurement? Or would you like to review a how to generate a simple script to summarize your measurements?

Tiles

Analyze->Tiles & superpixels->Create tiles:

Overview: Takes an annotation area object and breaks it up into tiles.

Basic, but does the trick for simple analyses. One option is to create detections that can have measurements (Add intensity features), and then be classified. It can also be used to create annotations for various slow detection methods to work on a large scale, like positive pixel detection, sending regions to ImageJ, or creating image tiles for output for AI training/processing.

Tile size: How large each square is on a side.

Trim to ROI: Creates non-square tiles if the tile at any given position would otherwise extend outside of the parent annotation. Prevents including too much whitespace in the measurements of tiles on the edge of tissue.

Make annotation tiles: What it says. Annotation tiles can have other things generated inside of them, while detections cannot. Whether you want this depends on your final purpose. Don’t select it if you want to simply classify a bunch of tiles as positive or negative to quickly get a rough positive area measurement (or run a more complex classifier). Do check it for most other things.

Remove parent annotation: Project specific, but generally I would check this if Make annotation tiles is also checked. If you check only Make annotation tiles, and then attempt to selectAnnotations() and run some sort of cell detection, as soon as the parent annotation is processed it will wipe out all tile annotations within it. If you want tile annotations, but also want to end up with one large annotation at the end, I would check both of these, and then Objects->Merge selected annotations at some point later in the pipeline (see the sketch below).
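
For scripting, the same tiling command can be run with runPlugin. Treat the parameter string below as a sketch only; the key names and values come from one version's Workflow tab and may differ in yours, so run the command once through the GUI and copy the exact line QuPath logs:

selectAnnotations()
// Assumed parameter names/values; copy the real line from your own Workflow tab
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 250.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": true}')
// Later in the pipeline, merge the tile annotations back into a single annotation
selectAnnotations()
mergeSelectedAnnotations()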

Were you looking for something else for your area measurement? Or would you like to review a how to generate a simple script to summarize your measurements?

SLICs (Superpixels. Big Pixels.)

The best things in life are pixels

Analyze->Tiles & superpixels->SLIC superpixel segmentation:

Now we get to the good stuff. This option pairs well with another tool in the same menu:

Analyze->Tiles & superpixels->Tile classifications to annotations.

Overview: Create detection tile objects that follow the outline of similarly stained pixels. This works in both brightfield and multichannel images.

Another tiling method, but this time you only have the option of starting with detections. They follow the outline of similarly stained pixels, and are a very flexible area classification tool. Once the SLICs are classified, similarly classified areas can be merged into annotations (see short guide here), which can then be used to search for cells or other objects. The most obvious example would be determining a “Tumor vs Stroma” area in something like an H&E, or in complex images such as those generated with MIBI. Decision tree or other trainable classifiers can be used on the large number of measurements that can be applied to these through Analyze->Calculate features->Add intensity features. More information on adding features here.

Getting SLICs that work for your project is too project specific to get into, but generally for large tissue based features I prefer larger, more regular tiles, while for cell detection or positive area detection I make them as small and irregular as I can tolerate (smaller and less regular = more processing time). Also, if you are using Haralick features to analyze texture, you will need either larger tiles, or you will want to use information from surrounding areas to help classify the tile.

The buttons and knobs

Gaussian sigma: Blur. Increase this for smoother tiles or to increase performance.

Superpixel spacing: Depending on your target and classification method, this can vary quite a bit. I usually start with 50 and then lower it as I want to get more precise with the edges of what I am measuring.

Number of iterations: I usually don’t change this, and haven’t noticed a major effect by increasing it. You may be able to speed up large areas by decreasing it, but always test on a small region first to see the impact.

Regularization: Can have a massive impact on the shape of the superpixels. The value seems to be very correlated with the size of the superpixel.
For example, compare regularization 10 with regularization 0.01 (the latter shown with a heatmap of Eosin staining to emphasize the difference).

Measurement maps + SLICs can be used to accentuate many features of the tissue, even with relatively simple measurements.

Tissue segmentation and classification with SLICs has been a fairly standard way for me to start complex projects. Classification will require the addition of new feature measurements, of which there are a variety!

This functionality has in large part been superseded by the pixel classifier, but there are still some things you can do with SLICs that simply can’t be done with the current pixel classifier. Chief among these are the combination of large scale features with small scale ones, and the use of all of the Haralick features to classify objects.

If you want to measure the areas belonging to a certain class, you can either sum the detection area measurements within each class with a script, or convert classified detections into annotations, which have an automatic area measurement.
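
A minimal sketch of the first option, summing detection areas per class. This reads the ROI areas directly rather than a measurement column, and the areas are in pixels; multiply by the squared pixel size from the Image tab if you want µm²:

def areasByClass = [:].withDefault { 0.0d }
getDetectionObjects().each { det ->
    def pc = det.getPathClass()?.toString() ?: "Unclassified"
    areasByClass[pc] += det.getROI().getArea()   // area in pixels
}
areasByClass.each { cls, area -> println(cls + ": " + area + " px^2") }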

Were you looking for something else for your area measurement? Or would you like to review a how to generate a simple script to summarize your measurements?


Subcellular detections

Replaced by simple thresholder

Analyze->Cell Analysis->Subcellular detection: Create an annotation, turn it into a cell object using a script, then perform your subcellular detection. This can work better for small, oddly shaped areas. This option is primarily used for multichannel images, as a stand-in for Positive pixel detection, which is brightfield only. It requires pixel size information (Image tab). This is definitely not the intended use for Subcellular detections, as it is more of a spot counter.
For the LuCa 7 channel sample (a field of view image), I would start by creating a full image annotation, then run the script linked to turn it into a cell object, followed by creating another whole image annotation (making data wrangling easier later on). So:

createSelectAllObject(true); // whole image annotation, selected
//Paste the annotation-to-cell conversion script in here
createSelectAllObject(true); // second whole image annotation, for easier data wrangling later

Ok, I was wrong, I do need to make one minor adjustment to the pasted script, as indicated in the script comments.
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()}
needs to be changed to
def targets = getObjects{return it.isAnnotation()}

Now my hierarchy looks like this.

After playing with the settings a little, I ended up with this.


Note that this isn’t generally a good idea, and as you can see here, really didn’t do a great job of splitting some of the tumor cells. YMMV, but it is another option to try if other options aren’t working. It will take additional scripting to convert this into useful data, so expect to do a little coding if you go this route.

Were you looking for something else for your area measurement? Or would you like to review a how to generate a simple script to summarize your measurements?

Multichannel

If you are here, your images should be something like a fluorescence, MIBI, or CyTOF image that is composed of multiple grayscale, largely independent channels. While QuPath doesn’t exactly perform linear unmixing, some of the same functionality can be achieved in a script by altering the detection measurements that are generated. Some of the R^2 or colocalization scripts can be used to check for bleed-through.

If you have this sort of project, do you have a single field of view? Tissue of some sort?


Single field of view

For single field of view images, you usually will want to analyze the entire image. In the Objects menu, there is an option to Create full image annotation, or in a script you can use

createSelectAllObject(true)

The true argument selects the object, so that whatever you run on the next line will have that object already selected. If you are running scripts multiple times on the same set of images, note that this does not clear all objects in the image in the same way that Simple Tissue Detection does. You may want to include a

clearAllObjects()

line in order to prevent many overlapping sets of objects.
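
So a re-runnable per-image snippet might look like the sketch below; the final comment stands in for whatever command you actually want to run on that selection:

clearAllObjects()            // start clean so repeated runs don't stack objects
createSelectAllObject(true)  // full image annotation, already selected
// ...follow with cell detection or whatever should run on the selected annotation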

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Tissue detection in multichannel images

Try the simple thresholder here (except you will use “above” instead of “below” threshold for your classifier) if you have one dominant channel (DAPI with some Gaussian blur?), or use the pixel classifier for more complex projects and images.

Now obsolete as of the simple thresholder and pixel classifier

Much of this will likely be obsolete once the Pixel classifier is fully functional.

Simple tissue detection: Seems to only take into account the first channel of the image. If you can adjust your input such that this channel is DAPI, or some highly autofluorescent channel, you might be able to use Simple Tissue Detection in the same way as the brightfield instructions, only with Dark background checked.

Very likely this won’t be the case, so I recommend checking out this thread which contains both a script for, and a rough description of how to perform, tissue detection in a multichannel image.

The script is a great example of how to use ImageJ to handle some image analysis, and could be a good starting point to build off of for more complex ImageJ based analyses. I can go into more detail if there is enough interest.

Alternatively, you can fairly quickly draw areas using the brush tool or wand tool. The wand tool, in particular, can be very useful here as it takes into account what is visible on screen. That means you can turn off interfering channels, and/or enhance the brightness of useful channels, in order to make the tool more convenient to use.

What is your analysis? Are you counting cells, measuring areas, or something more complex (advanced scripting)?

Counting cells

Counting cells in multichannel images is far, far easier than in brightfield counterparts, as there is usually a well defined, unobstructed, nuclear channel that can be used as the core of cell objects. Still, you have several options.

Cell detection: The normal command
Manual spots: Annotation spot tool
Subcellular detection tool: Abuse the system. Also can be used as an area measurement to detect cell-like objects. TUNEL?
Manual creation: Build cells for confocal or other high magnification images.

Cell detection in IF

Reserved for expansion, but for now most of the measurements and their uses are the same as the brightfield cell detection description.

No longer true in 0.2.0

The only major changes are that Positive cell detection no longer works, and that your Choose detection image options consist of your various channels. If you have a fluorescent image in an RGB space, your channels 1, 2, and 3 are R, G, and B, so channel 3 will most likely be your detection channel for nuclear markers such as DAPI/Hoechst.

I would definitely recommend looking at some of the concerns about cytoplasmic measurements described here if any of your channels are for cytoplasmic proteins. Membrane proteins that localize similar to HER2 are particularly problematic.

If you have multiple nuclear channels, and want to combine multiple cell detections, I recommend checking out this thread.

Definitely check out some of the classification options available!

Points tool

This currently works exactly the same in multichannel images as in brightfield.

Manual creation of cells: I have a short guide for using a detection channel to create nuclei, and then using the wand tool to paint the cytoplasm for very irregular cells at high zoom. This isn’t a great option if you have a lot of cells, but can work very well for a few cells per field at 63x.

https://groups.google.com/d/msg/qupath-users/ehxID096NV8/QUMpoXc_BwAJ

If you are choosing this method, you may still want to classify your cells or generate some measurements for them, or you might want to skip to generating a measurement summary at the end of the scripting post.

Tiles

Chopping up an image into a checkerboard

Analyze->Tiles & superpixels->Create tiles

Overview: Takes an annotation area object and breaks it up into square tiles.

Basic, but does the trick for simple analyses. One option is to create detections that can have measurements (Add intensity features), and then be classified. It can also be used to create annotations for various slow detection methods to work on a large scale, like positive pixel detection, sending regions to ImageJ, or creating image tiles for output for AI training/processing.

Tile size: How large each square is on a side.

Trim to ROI: Creates non-square tiles if the tile at any given position would otherwise extend outside of the parent annotation. Prevents including too much whitespace in the measurements of tiles on the edge of tissue.

Make annotation tiles: What it says. Annotation tiles can have other things generated inside of them, while detections cannot. Whether you want this depends on your final purpose. Don’t select it if you want to simply classify a bunch of tiles as positive or negative to quickly get a rough positive area measurement (or run a more complex classifier). Do check it for most other things.

Remove parent annotation: Project specific, but generally I would check this if Make annotation tiles is also checked. If you check only Make annotation tiles, and then attempt to selectAnnotations() and run some sort of cell detection, as soon as the parent annotation is processed it will wipe out all tile annotations within it. If you want tile annotations, but also want to end up with one large annotation at the end, I would check both of these, and then Objects->Annotations->Merge selected annotations at some point later in the pipeline.

Using tiles to generate a heatmap overlay:

You may want to add some features to your tiles, which would then let you perform a classification of some sort. Alternatively, if you have merged your tiles into new annotations by class, you may want to head back to look at cell counting options!

Subcellular detections

This method can mostly be replaced by the pixel thresholding tools in 0.2.0

Another place where things get fun! Turning a single field of view whole image annotation into a cell allows the use of the subcellular detection command to outline areas of cytoplasmic or other stains independent of cell expansion. You can use this, essentially, like the positive pixel tool, except that it will require a bit of coding.

  1. Create your whole image annotation.

  2. Convert it into a cell with a script (Be sure to read how to modify the script for this purpose).

  3. Run the subcellular detection on as many channels as you want, and adjust your thresholds until you are happy with the results. You can change the min and max spot size values to whatever you want, but make sure to keep the estimated spot size at 1. This makes the estimated spot count for a given channel equal to the area detected by the subcellular detection command.

  4. The “cell” object will now have an estimated spot count. If you have several fields of view that are all the same size, you are essentially done, as you can compare the areas that are positive in each channel between images.

  5. To get more complicated, you could then re-create the whole image annotation, add the estimated spot counts from the cell object to the annotation object (more scripting; see the sketch after this list), and then run a cell detection afterwards. This will wipe out your whole image cell, but create a number of cells that could be used to normalize the area measurement. The final annotation will now have a measurement of the area covered by each channel of interest, and a count of the total number of cells.

  6. Summarize the results across the project.
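
A rough sketch of the scripting part of step 5. The measurement names here are assumptions; open the measurement table for your whole-image “cell” and copy the real names before using anything like this:

def bigCell = getCellObjects()[0]   // assumes the whole-image "cell" is the only cell so far
double spotEstimate = measurement(bigCell, "Subcellular: Channel 2: Num spots estimated")   // assumed name
createSelectAllObject(true)         // re-create the whole image annotation, selected
def annotation = getSelectedObject()
annotation.getMeasurementList().putMeasurement("Channel 2 positive area (spot estimate)", spotEstimate)
annotation.getMeasurementList().close()
fireHierarchyUpdate()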

Classification

0.2.0 classification

Classification has undergone some major changes in 0.2.0, most of which are covered here for normal single marker analyses:
https://qupath.readthedocs.io/en/latest/docs/tutorials/cell_classification.html
and here for more complex multiplex analyses:
https://qupath.readthedocs.io/en/latest/docs/tutorials/multiplex_analysis.html#create-a-classifier-for-each-marker

The primary thing I want to point out is that if you are doing a brightfield analysis, the stain vectors and background you have selected when you OPEN the dialog box are what will be used for thresholding. Making any changes to the background values, or tweaking your stain vectors, absolutely requires that you close the dialog and open a new one.

0.1.2 entry, for reference

See the multiplex classifier thread for a lot of information on classification options that include:

Positive cell detection: Classifier and cell detection all rolled into one.

Classify->Classify by specific feature: Build your own classifier through a GUI. Very tedious for anything more than a simple classifier.

setCellIntensityClassifications(): one line script to handle simple classifications based on a single measurement. Terrible example that I have occasionally actually used in order to turn all cells “Positive” very quickly:
setCellIntensityClassifications("Nucleus: Area", 1)

Classify->Create detection classifier: Create a machine learning-based trained classifier (subsection Train a cell classifier based on annotations) by defining your own training sets and measurements. Possibly create new features for this purpose.

If/then based scripts: Create your own decision tree (see the sketch after this list).

Multiplex classifier: A GUI based script that allows a set of measurements to be used to establish base classes, and then classifies all objects according to which sets of parameters they meet the thresholds for. There are a lot of resources available through that thread, but if there is any specific information you think should be included here, please message me to let me know.

UPDATE: Pete has a new multiplex classifier with its own workflow described here. With a trained object classifier, you will have additional flexibility for classifying cells as positive or negative for a given marker when compared with only using a single measurement.
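
As an example of the if/then approach, here is a minimal sketch of a hand-rolled decision tree. Every measurement name and threshold here is an assumption that you would replace with your own:

getCellObjects().each { cell ->
    double cd3 = measurement(cell, "Cell: CD3 mean")   // assumed measurement name
    double cd8 = measurement(cell, "Cell: CD8 mean")   // assumed measurement name
    if (cd3 > 10 && cd8 > 10)
        cell.setPathClass(getPathClass("CD8 T cell"))
    else if (cd3 > 10)
        cell.setPathClass(getPathClass("T cell"))
    else
        cell.setPathClass(getPathClass("Other"))
}
fireHierarchyUpdate()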

Once you have a set of objects that are classified, if your process is automated you will probably want to run it across a selection of images and get some data at a project level. If you are having trouble with classifier accuracy, you might want to look into generating additional features that are more pertinent to your project.

Creating a script to run on the project

Creating a script for a project in 0.1.2

Most scripts will start in the Workflow tab, with the Create script button.

That will generate a list of commands that you have run, which you can then trim down to the ones you really need for your analysis. As long as you have an entirely automated process (no manual editing, putting down annotation Points, etc), creating a script is usually fairly straightforward. After running through some ideas for this section, I felt like a concrete example would be the best way to help understand the process.

The original TIFF image used in this post is LuCa-7color_[13860,52919]_1x1component_data.tif from Perkin Elmer, part of Bio-Formats sample data available under CC-BY 4.0.


Here is a list of commands I used while playing around with the LuCa multichannel image. You can see from the list that I tinkered with various watershed cell detection options, and jumped around a bit with what I was looking at. When I click Create script, I get something like this:

If I were to run this right now, it would attempt to replicate all of those commands, and I don’t really want to run 5 cell detections. The line setting the image type agrees with my image being fluorescent, and the createSelectAllObject(true); line outlines my single field of view with an annotation object, so those are both good. In a brightfield whole slide image those lines might be replaced with something more like:

Since I have an annotation, and I know that it is selected, I then want to run my final cell detection command (assuming I was improving my cell detection each time!).

Now I have something like this:

Note: On Windows-based systems, the file path needs to be written with double backslashes. In 0.1.2, you may need to edit this manually.

Looking at the rest, I don’t want to clearAllObjects() right in the middle of my script! So I will definitely remove that. I also have a random createSelectAllObject() at the end of the script that I don’t need. I would like to run the trained classifier, however, so I will keep that. I also want to see cluster measurements, so I will keep that line as well. Come to think of it, if I want to run this project multiple times, I will not want to keep adding objects on top of objects, so I should really place one of those lines near the top. Think of all of these lines as LEGO building blocks, and you want to add them all in the correct order. I also want to run my cluster analysis AFTER I have classified my cells! So I will move that to the bottom.

Now I get something a little bit more like I would want when I run the whole script:
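
In other words, the cleaned-up script ends up as a short, ordered skeleton, roughly like the sketch below. The placeholder comments stand in for the exact runPlugin/classifier lines your own Workflow tab logged; only the ordering is the point:

setImageType('FLUORESCENCE')  // keep the image type line QuPath logged
clearAllObjects()             // near the top, so re-running never stacks objects
createSelectAllObject(true)   // single field of view annotation, selected
// <final cell detection line copied from the Workflow tab goes here>
// <trained classifier line copied from the Workflow tab goes here>
// <Delaunay cluster features line goes here, AFTER classification>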

However, I also want to extract some annotation level data from each of the images in my project!

If I go to Measurements:

And look at my annotation measurements (in 0.1.3), I get something like this:

If I were in 0.1.2, I would insert a script like this at the end of my current script in order to add percentage positive and cell density measurements per classification to the parent annotation.
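
That linked script isn’t reproduced here, but a hedged sketch of the same idea (counting classified cells under each annotation and writing summary measurements back to it) looks something like this. The density is left in cells per squared pixel, so convert with your pixel size if you want a real density, and adjust the class check to match your own class names:

getAnnotationObjects().each { ann ->
    def cells = ann.getChildObjects().findAll { it.isDetection() }
    def nPositive = cells.count { it.getPathClass()?.toString()?.contains("Positive") }
    double pct = cells ? 100.0 * nPositive / cells.size() : 0
    def ml = ann.getMeasurementList()
    ml.putMeasurement("Positive %", pct)
    ml.putMeasurement("Cell density (per px^2)", cells.size() / ann.getROI().getArea())
    ml.close()
}
fireHierarchyUpdate()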

For now, though, these measurements will do for the demo. Now we turn to some of Pete’s scripts from his blog. The first script is added to the end of any script we want to run for the project.

This script creates a new folder within the project folder to store a text file for the current image.

With the combined script, we can now Run for project


And select all of the files that we want. This only makes sense if your files are all similar!

Now I have a selection of text files in an “annotation results” folder. Well, that’s not really what I wanted, I wanted a single file where I can peruse all of my image data at once! Pete has also provided a script for this here.

Copy that into a new script window and run it, targeting the annotation results folder within your project.

If you never modify the first script (the single-file script that writes everything to the annotation results folder), you can make a one-line change to prevent the popup and navigation. It will assume that all of your individual text files are in the default location within the project, as defined in the script above.

Swap this line:

def dirResults = QuPathGUI.getSharedDialogHelper().promptForDirectory()

For this line:

def dirResults = new File(buildFilePath(PROJECT_BASE_DIR, 'annotation results'))

Now if you look in your annotation results folder, you should have a Combined_results.txt file that can be opened in pretty much any spreadsheet program!

WARNING: Run for project… does not appear to edit the image that is currently open, even if that image is one of the images included in the run. However, it IS editing that data file in the background. As such, QuPath will see that the data file has been edited (without showing you any of the changes on the screen) and will ask you if you want to save your data. The correct answer to this is usually NOOOOOOOOOOOOOOO. If you select yes, you will be overwriting the results of the Run for project… with the data currently visible on screen.

That’s it! You made it! Well. Maybe. I wrote most of this in a day, so I am certain there will be glaring errors and gaps (above and beyond the ones I am already aware of!) that may prevent this guide from being as useful as it could be. Feel free to start a topic or send me a message and I will try to clarify or fix up as much as I can. I have also toyed with the idea of keeping one consistent image throughout most of the steps, so that everyone could follow along with their own copy… but time, time, time.

I will definitely be adding a section on the Pixel classifier once that is fully functional, but for now I am avoiding that and the alignment tool, among others.

Good luck with all of your analysis!


Advanced coding

Newest examples of code here on the new readthedocs site:
https://qupath.readthedocs.io/en/latest/docs/scripting/index.html

For more complex analyses, outside the purview of an introduction, you will probably want to do some scripting above and beyond simple scripting. I have no plans for an advanced scripting guide until versions stabilize, but there are some code examples from Pete, in Pete’s blog, and a collection from various places on the forums, somewhat organized by type and by what they accomplish.

UPDATE! New post here from Pete regarding a lot of scripting access to various parts of the QuPath program, like iterating through a project from a script or accessing metadata fields.

Miss something? Back to the top!


Adding Features

The Calculate features menu has been split into the Calculate features and Spatial analysis menus in 0.2.0.

Add Intensity features

The largest and most flexible feature generator summarizes intensity data in or around a given detection. You can run this on a single selected object (a cell), on an annotation, or on everything. A simple use would be to create a full image annotation, collect the mean value of all channels/deconvolutions for all of your images, and take a look at the resulting spreadsheet to get an idea of the amount of variability in your project images. Of course, it gets a lot more complex than that. One downside is that it currently generates only whole cell measurements, not whole cell, cytoplasmic, and nuclear measurements. As with many things, a script can help with that; see near the bottom of the post.

The options

Preferred pixel size: Larger is lower resolution and faster; smaller is the opposite. This value is most important when dealing with small objects, as you need a certain number of pixels to be available or the creation of the feature fails. So if I am looking at ~1um^2 subcellular detections and try to use a Preferred pixel size of 2um, QuPath will not throw any obvious errors, but I will also not get very many objects with measurements.

If you are missing measurements in some of your detections, try reducing this, if necessary all the way down to the image’s pixel size. Otherwise, you need to make your detections larger. You can test this by creating 2 pixel by 2 pixel tile detections and trying to get features.
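
A quick way to check whether this is happening is to count how many detections are missing one of the new features. The measurement name below is only an example; copy a real one from your measurement table:

def name = "ROI: 2.00 µm per pixel: Channel 1: Mean"   // example/assumed name, copy yours
def missing = getDetectionObjects().count { Double.isNaN(measurement(it, name)) }
println(missing + " of " + getDetectionObjects().size() + " detections are missing '" + name + "'")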

Regions: I avoided this for the longest time because I didn’t appreciate its use. ROI is the default, and it looks at exactly the pixels within the object. Square tiles, circular tiles, and the Tile diameter are where things get interesting, as you can learn more about the environment around your object without actually changing the object itself. There are quite a few uses for this, and probably many I haven’t thought of, but here are a couple. In 0.2.0 the option for adding measurements for the nuclei only is available, so you can combine the nuclear and full cell measurements to roughly calculate the mean intensity within the cytoplasm, if desired.

Gain contextual information about the object by including information about the surrounding tissue using Haralick features generated by larger area tiles.

Cells can be quite densely packed, and occasionally it can be useful to get an idea of the area directly around the nucleus, regardless of whether that area is considered part of the nucleus’ cell. The area of a circular tile, multiplied by its mean intensity, would give me a better idea of the amount of yellow channel marker in that cell (provided that cell type is sparse/non-touching).

Warning: I am pretty sure the tiles are centered on the selected object’s XY centroids. That means if you select the nucleus, the tile will be centered there. If you select the cell object, the center of the cell might not be the same as the center of the nucleus. This behavior could lead to unexpected results with columnar cells, for example.

Color transforms: What do you want data on? Everything selected below this will be run on every channel selected here. If you want to micromanage what is generated for which channel (and this can be important when you have a million cells), run the command multiple times.

Basic features: Exactly what they say. I never use Min&Max, and generally if I want standard deviation I find a Haralick feature that more accurately defines what I am really looking for.

Haralick features: A nice collection of features you can read about elsewhere.
Section 2.4 https://www.hindawi.com/journals/ijbi/2015/267807/
http://haralick.org/journals/TexturalFeatures.pdf
I suspect most people will use these the same way I do: compute them, then look at the results in Measure->Show measurement maps and see which features/channels separate out your areas/objects of interest. Min and Max should be your “values of interest”, i.e. Min is your lower threshold (noise), and Max is frequently the maximum pixel value, though you might decrease this if dead cells or specks of something are brightly autofluorescent. If these values are different for every channel, you may want to run the channels separately.

I have never changed the Haralick distance from 1, and haven’t seen much of a difference from increasing the number of bins, though I have not tested it extensively.

Add smoothed features:

I have usually used this for SLIC classification before reclassifying and merging the SLICs into annotations. Usually there are some misclassified objects, and by either creating a classifier based on the smoothed features, or smoothing the results of a previous classifier (scripting), I can create more consistent annotations. Pete demonstrates a similar use for it in the Wiki, here.

Smooth within classes: Limits the smoothing to objects with the same class. I generally don’t use this since I am usually applying the features prior to determining the class.

Legacy features names: If you had an older project and want to keep names consistent. If you don’t know whether you need this, you probably don’t need this.

Add shape features

Since tiles and SLICs come without an area, this gives you the option to add those back in. You could also add perimeter and circularity to subcellular detections this way. This has been dramatically expanded in 0.2.0 to allow for the addition of further measurements to other methods of cell detection like StarDist.


Most of these features are fairly well understood, but I figured I would add in a couple of pictures on Solidity.
See more on convex hull here: https://en.wikipedia.org/wiki/Convex_hull

Add Delaunay cluster features

Updated before 0.2.0 to handle much larger numbers of objects. I find the results interesting, so I will often take a look at this with Limit edges to same class and Add cluster measurements both selected, and then use the Measurement maps to color by cluster size. Lines between objects can be turned off or on in the View menu.

See an example of the clustering in the LuCa image here: Cluster analysis

Issues with Delaunay clustering and some other options are mentioned here:
QuPath Script: Nearest Neighbors
Discussion and Script: What is a hotspot?

Distance to annotations 2D

There is no dialog for this; it runs when you click it. It will only pay attention to (generate values for) annotations with a class assigned. It will give each detection (regardless of type) a distance from the outer edge of an annotation. If you have a tissue slice, classify the annotation as “Tissue” and run this, every cell will have a distance of 0 because they are within “Tissue.” If you want the distance from the surface of the tissue, you have to think outside of the box. Or tissue, as it were. Objects->Annotations->Make inverse allows you to create an annotation that fills in the non-tissue part of the image; label that “Tissue Edge.” If you run the command again, all of the cells will still get a distance of 0 to Tissue, but will get an accurate distance in microns (assuming metadata!) to Tissue Edge. If you create a tumor annotation within your tissue, you can use the function to find how far outside of the Tumor annotation the cells are. To find how far into the tumor certain cells are, you would need to label the non-tumor part of the tissue “Stroma,” and then the Distance to Stroma would be positive and non-zero for all cells within the tumor.
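
Once the distances exist, they are just detection measurements, so you can use them in a classifier or a quick script. A sketch is below; the measurement name is an assumption, so check your measurement table for the exact wording your version generates:

getCellObjects().each { cell ->
    double d = measurement(cell, "Distance to annotation with Tissue Edge µm")   // assumed name
    if (!Double.isNaN(d) && d < 50)
        cell.setPathClass(getPathClass("Near edge"))
}
fireHierarchyUpdate()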

Detect centroid distances 2D

This specifically checks the distance of the center of any detection of a given class to the center of the nearest detection of any other class.

Example scripts to generate other measurements

General list of measurement based scripts here, with some selections highlighted below.

Multiple color deconvolutions: A script that lets you blast through several color deconvolutions at once (handy for isolating certain stains for a classifier), and has the added advantage of calculating both nuclear and cytoplasmic means. This is in contrast to the current (as of 0.2.0m3) Add intensity features, which will only calculate values for the whole cell.

Feret values through ImageJ: Use ImageJ functions to find the Feret angle, diameter, etc. As written it only calculates the angle, but using a different array value (other than 1) can get other Feret measurements.

R^2: Calculate R squared values between various measurements for your cells.

Colocalization: Find the colocalization between channels in various types of areas. Pearson and Manders. Do make sure you check the extra reading links so that you understand how they both work and should be used!

That’s about it for features for now, if you have anything specific in mind, start a new topic and ask away!

Since you can now generate features, these can be applied to cell detections, SLICs or Tiles, or annotations that you have already generated!


(A section on the pixel classifier will probably go here eventually.)


Citing QuPath

So far this has been a solo effort, so please do support the creator by citing the original QuPath Scientific Reports paper any time you use it to generate relevant data!
I suspect most of you know that this will help support further development of this project for all of us by making it easier for the author to get grants and collaborations to support his efforts.
