Any way to edit/correct object classifier results upon finding a misclassified object?

Hello!

I am a PhD student working on a project that uses fluorescent in situ hybridization in the postmortem human brain, and am fairly new to QuPath (using version 0.2.3). I am currently trying to analyze images taken at 20x on an Olympus FV1200 laser scanning confocal microscope.

Here is my current workflow:
1) Do a maximum intensity projection on the z-stack in Fiji, export it as a TIFF, and import that TIFF into QuPath.
2) Adjust the brightness/contrast of each channel.
3) Use positive cell detection on the DAPI channel to automatically identify cells.
4) Use those detected cells as training objects for object classifiers on the other channels (to detect positive or negative cells).
5) Apply the trained classifiers across the whole project and obtain the density of positive cells in each image.
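(Step 5 can also be scripted once the classifiers have run. A minimal Groovy sketch of a per-image density calculation, assuming a class named "Positive" and a micron-calibrated image; both are assumptions to adapt:

```groovy
// Count positive detections and report density over the first annotation's area
// ("Positive" class name and micron calibration are assumptions)
def annotation = getAnnotationObjects()[0]
def cal = getCurrentServer().getPixelCalibration()
double areaUm2 = annotation.getROI().getScaledArea(
        cal.getPixelWidthMicrons(), cal.getPixelHeightMicrons())
int nPositive = getDetectionObjects().count {
    it.getPathClass() == getPathClass('Positive')
}
println String.format('%d positive cells, %.2f per mm^2',
        nPositive, nPositive / (areaUm2 / 1e6))
```)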

Here is my issue: sometimes the positive cell detection or object classifier makes mistakes (e.g., detecting two or three nuclei as one nucleus, or classifying a cell as positive when it is in fact negative). Despite playing around with the parameters, there seem to be misclassifications I cannot fix, probably due to the degraded nature of postmortem human brain tissue and the high background signal.

My question: is there a way to manually change a classification when I notice a mistake? For example, can I manually switch an annotation from a “negative” label to a “positive” label? Can I fix positive cell detections that encompass more than one cell manually? Any ideas of how I can resolve this problem? It would not be feasible to annotate all of the images manually.

Thank you!!
Kelly

If you post some examples of what is going wrong and what you think the right answer should be, someone might be able to make more specific suggestions. My only comment right now is that there is likely no perfect answer - generally you try to balance oversegmentation errors (single cells split into multiple) against undersegmentation errors (multiple cells treated as one) for a given set of settings. You might also consider segmenting at the tissue level first (tumor/stroma/something else?) and then applying different cell segmentation parameters to each region - this can be especially useful in areas with dense immune cell populations.

Also, sometimes 20x is not going to be enough to correctly resolve cells with current methods. Especially in lymphoid tissue (not sure about your samples), 40x was often necessary for “good enough” segmentation from my point of view. In other cases (pancreas, liver) 20x was plenty.

You can use Set Classification in the Annotations tab to manually change a label. Relying on this is generally not a great idea for reproducibility: given the same image and the same script, no one could repeat your results if you then changed them by hand. That should not stop you from making sure things are “right,” but you should report that you performed a manual adjustment step.
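The same relabeling can be scripted; a hedged sketch (the 'Positive' class name is just an example, and it acts on whatever objects are currently selected):

```groovy
// Reassign the class of the currently selected objects
// ('Positive' is an example class name, not a recommendation)
getSelectedObjects().each { it.setPathClass(getPathClass('Positive')) }
fireHierarchyUpdate()
```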

Cells corrected this way will not feed back into training a better classifier, though you could use the Points tool to mark misclassified cells and improve the classifier in future iterations. This is why it is best practice to keep either separate images or a separate project for classifier training; the classifiers folder can then be copied over to the “real” project for analysis.

Detection ROIs, unlike annotations, cannot easily be adjusted by hand; the tradeoff is that they are lightweight and QuPath can handle millions of them.

@Research_Associate Thank you so much for your reply! Much appreciated.

I think that one of my main issues is undersegmentation in the cell detection. I have attached some screenshots showing examples of the issues. This is particularly problematic because when I then train the classifier it will say that one “cell” is positive for both of my markers of interest, which is not biologically possible.

Here are the settings I am using for cell detection (see the script sketch after this list):
Requested pixel size: 0.5
Background radius: 4.0
Median filter radius: 0.0
Sigma: 4.0
Minimum area: 30
Maximum area: 400
Threshold: 100
Split by shape: true
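For reference, the Workflow tab can turn these settings into a script. With the values listed above, the generated call should look roughly like this sketch (the 'DAPI' channel name and the trailing cell-expansion options are assumptions):

```groovy
// Cell detection with the settings listed above (a sketch of the
// Workflow-tab output; the 'DAPI' channel name is an assumption)
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
    '{"detectionImage": "DAPI", "requestedPixelSizeMicrons": 0.5, ' +
    '"backgroundRadiusMicrons": 4.0, "medianRadiusMicrons": 0.0, ' +
    '"sigmaMicrons": 4.0, "minAreaMicrons": 30.0, "maxAreaMicrons": 400.0, ' +
    '"threshold": 100.0, "watershedPostProcess": true, ' +
    '"cellExpansionMicrons": 0.0, "includeNuclei": true, ' +
    '"smoothBoundaries": true, "makeMeasurements": true}')
```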

Do you have any thoughts on how to improve my settings? I have played around quite a bit, but I keep running into the same issues.

I know that the watershed step is part of cell detection, but is there a way to run it again afterwards, or to change its settings?

Thank you so much again,
Kelly


A sigma of 4 means a 4 micron (or pixel) Gaussian blur, which is going to make it very difficult to separate your cells. Notice how much space there is around your nuclei: a lower sigma will split more cells and create tighter borders around your nuclei.

If you want to see the effect, create a small dummy pixel classifier (easiest way I have found to see this), and look at the Gaussian blur filters.


That can give you an idea of what the algorithm is “seeing” when it performs cell detection.

There is no built-in way to further edit detections, but you can do most things if you dig into the scripting. You will have to dig deep, though, to further watershed your already segmented cells. You may find it easier to send parts of images to ImageJ/Fiji and get the resulting ROIs back for analysis.
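As one example of that kind of scripted cleanup, a hedged sketch that deletes detections whose nuclear area suggests a merged object, so they can be re-drawn or excluded (the measurement name and the cutoff of 100 are assumptions; check your measurement table):

```groovy
// Remove detections whose nuclear area suggests several merged nuclei
// (measurement name and cutoff are assumptions to adapt)
def merged = getDetectionObjects().findAll {
    measurement(it, 'Nucleus: Area') > 100
}
removeObjects(merged, true)
fireHierarchyUpdate()
println 'Removed ' + merged.size() + ' oversized detections'
```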


Since your cells appear to be circular, you may also want to look into StarDist for segmentation.
https://qupath.readthedocs.io/en/latest/docs/advanced/stardist.html
It will require a different version of QuPath, but it can be quite powerful for detecting and separating partially overlapping cells.
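For reference, the builder pattern shown in those docs looks roughly like this (the import matches the 0.2-era docs; the model path and parameter values are placeholders to adapt, not recommendations):

```groovy
import qupath.tensorflow.stardist.StarDist2D

// Sketch adapted from the QuPath StarDist docs linked above;
// model path and parameter values are placeholders
def pathModel = '/path/to/dsb2018_heavy_augment.pb'
def stardist = StarDist2D.builder(pathModel)
        .threshold(0.5)              // probability threshold
        .normalizePercentiles(1, 99) // intensity normalization
        .pixelSize(0.5)              // resolution used for detection
        .build()

def imageData = getCurrentImageData()
def pathObjects = getSelectedObjects() // parent annotation(s)
stardist.detectObjects(imageData, pathObjects)
println 'Done!'
```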


This is incredibly helpful, thank you so much again!

So even once I fix up the DAPI, I am still having problems with classification due to high background (I think) or differing intensities (it doesn’t work well to image all of these samples at the same intensity).

Is there an optimal way to remove background specifically before training a classifier? Should I just reset the dynamic range to increase the minimum so I remove a good portion of the background?

Below is an example of one properly classified and one improperly classified cell (green outline). I haven’t adjusted brightness/contrast at all. The one on the top is properly classified, but the one on the bottom is not. I appreciate any recommendations!

Lastly, how many different images do you recommend training on for optimal classifier performance without overfitting? 1 image per subject?

Thank you so much!
Kelly

There is no good answer to this, as it depends on the quality of the data and the strength of the difference you are measuring. Training classifiers can be difficult in QuPath because you are limited to the values in the measurement table: classification knows nothing about how the cell looks or its environment unless you create those measurements for the cell yourself.

Brightness and contrast have no effect on the image or on any analysis; they are for your visualization only. QuPath does not directly manipulate the image, though a recent plugin does allow some modification to create a new image.

If the strength of the signal is what you indicated in the image, and you only have one channel of interest, you probably should not be using a trained classifier at all. Mean or median intensity should allow you to pick up the signal - or else the background is likely too high and the experiment needs to be changed prior to imaging.

Classifiers are for when multiple measurements impact positivity, like if you need small nuclei AND positive green cells, or something like that.

https://qupath.readthedocs.io/en/latest/docs/tutorials/cell_classification.html#apply-intensity-classification
https://qupath.readthedocs.io/en/latest/docs/tutorials/multiplex_analysis.html#option-1-simple-thresholding

The second link also explains how to merge multiple classifiers to get double/triple/etc. positive cells.
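A hedged sketch of the scripted form of that single-measurement threshold (the measurement name and threshold value are assumptions; read them off your own measurement table):

```groovy
// Classify all cells as Positive/Negative from one intensity measurement
// (measurement name and threshold are assumptions to adapt)
setCellIntensityClassifications('Cell: Channel 2 mean', 100)
```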


Thanks so much again for your very helpful response!

The issue I have with the intensity classifiers is that I have to set different thresholds for each subject, since they were imaged with different settings. Is it valid to change the threshold for each subject?

Would it be feasible to manipulate the dynamic range of the image in Fiji, import that new image into QuPath, and then train a classifier on that?

Thanks again,
Kelly