Train to better detect cell boundaries after Positive Cell Detection in QuPath

I have fluorescent images double-labelled with CD45 and DAPI. QuPath's Positive Cell Detection has done a decent job of detecting most cells from the DAPI nuclear stain, but it can't distinguish side-by-side cell clusters (like the cell in the middle of the attached image). I tried pixel classification with the cell boundaries annotated, especially around those cell clusters, but it didn't detect other single cells very well and picked up lots of background junk. So I wonder if there is a way to combine these two methods: use the Positive Cell Detection function to detect cells first, and then apply pixel classification to better detect the cell boundaries.

[attached image]
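Roughly the kind of two-step combination I have in mind, sketched with scikit-image rather than QuPath's actual API (the thresholds, channel handling, and function names here are placeholders for illustration only):

```python
# Rough sketch (not QuPath's API) of the two-step idea, using scikit-image:
# nuclei detected from the DAPI channel act as seeds, and an edge map of the
# membrane channel stands in for the pixel classifier output to decide where
# touching cells should split.
from skimage.filters import gaussian, sobel, threshold_otsu
from skimage.measure import label
from skimage.segmentation import watershed

def detect_then_refine(dapi, cd45, sigma=1.5):
    """dapi, cd45: 2D float arrays holding the two fluorescence channels."""
    # Step 1: rough nucleus detection (stand-in for Positive Cell Detection)
    smoothed_dapi = gaussian(dapi, sigma=sigma)
    nuclei = smoothed_dapi > threshold_otsu(smoothed_dapi)
    seeds = label(nuclei)

    # Step 2: boundary-aware refinement (stand-in for the pixel classifier):
    # expand each seed with a watershed whose landscape is an edge map of the
    # membrane channel, so growth stops where the boundary signal is strong.
    smoothed_cd45 = gaussian(cd45, sigma=sigma)
    edges = sobel(smoothed_cd45)
    cell_mask = nuclei | (smoothed_cd45 > threshold_otsu(smoothed_cd45))
    return watershed(edges, markers=seeds, mask=cell_mask)
```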

If that is a consistent problem, have you tried reducing the sigma?
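As a toy illustration of why sigma matters (synthetic data, nothing to do with QuPath's internal code): with a large Gaussian sigma, two touching nuclei blur into a single peak, while a smaller sigma keeps them as two separate maxima that can be split into two detections.

```python
# Two synthetic "nuclei" 8 px apart: a small sigma preserves two local maxima,
# a large sigma merges them into one.
import numpy as np
from skimage.filters import gaussian
from skimage.feature import peak_local_max

img = np.zeros((64, 64))
img[32, 28] = img[32, 36] = 1.0  # two point-like nuclei

for sigma in (1.5, 5.0):
    blurred = gaussian(img, sigma=sigma)
    peaks = peak_local_max(blurred, min_distance=3, threshold_rel=0.5)
    print(f"sigma={sigma}: {len(peaks)} peak(s) found")
```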

Video tutorial


There isn’t currently a built-in way to do it, although it’s something I’ve explored a bit (and talk about related things here).

For now, the best QuPath offers out-of-the-box is the ability to adjust the cell detection parameters, which I describe in detail in the videos - but this may not be enough for some images.


Thanks. Decreasing the filter size and sigma did help to sharpen the cell boundaries and distinguish the side-by-side cells better. There are still a couple of cells that are hard to separate.

I really like the pixel classification function, where the neural network can be trained to separate cells at the annotated boundaries. (Pete’s blog is helpful too: https://petebankhead.github.io/qupath/2019/11/02/fifth-milestone.html) However, when I tweaked the five-layer NN, it didn’t appear to do better than the Positive Cell Detection function. I trained on around 50 cells and 100 cell boundaries. Do you think there is room to improve the NN pixel classification results, or should I stick with Positive Cell Detection? If you favour the NN pixel classification, which parameters would you adjust, based on your experience, to improve my current classification?
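For reference, this is roughly how I picture a multi-layer ANN pixel classifier working, sketched with scikit-learn. It is only my assumption of the general idea, not QuPath's actual implementation, and the feature set and class labels are made up for illustration.

```python
# Rough analogue (not QuPath's implementation) of a multilayer-perceptron pixel
# classifier: per-pixel features from smoothed channels and their gradients,
# trained on sparse annotations (cell / boundary / background).
import numpy as np
from skimage.filters import gaussian, sobel
from sklearn.neural_network import MLPClassifier

def pixel_features(dapi, cd45, sigmas=(1.0, 2.0, 4.0)):
    feats = []
    for s in sigmas:
        for channel in (dapi, cd45):
            smoothed = gaussian(channel, sigma=s)
            feats.append(smoothed)          # local intensity
            feats.append(sobel(smoothed))   # local gradient magnitude
    return np.stack(feats, axis=-1)         # shape (H, W, n_features)

def train_pixel_classifier(features, labels):
    """labels: int array; 0 = unannotated, 1 = cell, 2 = boundary, 3 = background."""
    annotated = labels > 0
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500)
    clf.fit(features[annotated], labels[annotated])
    return clf

# Prediction over the whole image would then be something like:
# pred = clf.predict(features.reshape(-1, features.shape[-1])).reshape(features.shape[:2])
```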


Thank you Pete for sharing this timely and updated workshop video and info. Looking forward to seeing further exciting developments from the QuPath team.


I think it’s a fairly difficult problem because of the context it requires, and the pixel classifier isn’t, at the moment, a deep learning classifier. You show an example where a pixel classifier definitely could separate two cells, but in many cases those same pixel values will not be the edges of cells but the middle of cells, where the tissue slice cut them differently (some cytoplasm and some nucleus).

The most useful features are probably gradient-based (I think), depending on the channel, but as you make the pixel classifiers more complicated, they also need much more training data. I’m not sure there is anything like rotational regularization for the pixel classifier in the same way that it exists for training deep learning classifiers.
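(By that I mean something along the lines of rotating training patches, as sketched below; this is my assumption of what such augmentation looks like for a deep learning classifier, not an existing QuPath feature.)

```python
# Sketch of rotation augmentation: generate rotated copies of each training
# patch and its label mask so the classifier sees cells at many orientations.
from skimage.transform import rotate

def augment_with_rotations(patch, mask, angles=(90, 180, 270)):
    """patch: 2D image patch; mask: matching label patch."""
    pairs = [(patch, mask)]
    for angle in angles:
        pairs.append((rotate(patch, angle, preserve_range=True),
                      # order=0 keeps label values intact (no interpolation)
                      rotate(mask, angle, order=0, preserve_range=True)))
    return pairs
```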

Your best bet for separating cells like that is at the image acquisition stage: more pixels, higher resolution. Even that won’t be perfect, though, as some of the nuclei will actually overlap. Your image may be 2D, but it represents a 3D structure (albeit a very thin one) that can include multiple cells overlapping one another.