Spatial Information Of Positive Pixel Count Detections

Hi!
I’m relatively new to QuPath and I’m using it to analyse spinal cord sections. I have analysed my slides using the positive pixel count function to pick up staining, but now I want to extract data about the position of the positive detections relative to the surrounding outer pial surface. Does anyone have any advice on how to work out the distance of detections from a whole surface, as opposed to a fixed coordinate in the image? Or any other ideas about how to go about this?

If you are using M10, and the outer object is an annotation, you can use Distance to annotations to find that per object.
It will be distance from the centroid of the positive pixel object to the nearest annotation point, not surface to surface.
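The same measurement can also be triggered from a script. A minimal sketch, assuming the v0.2.0 `DistanceTools` API (method names shifted between milestone builds, so check yours):

```groovy
// Compute centroid-to-annotation-boundary distances for all detections,
// equivalent to the 'Distance to annotations' menu command.
// Assumes QuPath v0.2.0; the class/method name may differ in older milestones.
import qupath.lib.analysis.DistanceTools

def imageData = getCurrentImageData()
DistanceTools.detectionToAnnotationDistances(imageData)

// New distance columns (one per annotation class) should now appear
// in the detection measurement tables.
print 'Distances computed - check the detection measurements'
```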

Here I drew an ellipse on some cells, classified it as Other, and used the Distance to annotations command.


And here I created an inverse annotation of the ellipse, classified it as OutsideIslet, and ran the same command.

  1. Make sure the annotation object has a class.
  2. Distance is from the centroid of any object outside of the annotation to the nearest edge of the annotation. That is why I created the inverse annotation for the second step.
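The two steps above can also be scripted. A sketch assuming v0.2.0’s `makeInverseAnnotation` helper and the class names from the example (`Other`, `OutsideIslet` are just the names used above):

```groovy
// Sketch: classify the selected annotation, then create and classify its
// inverse, so that objects outside the original shape can be measured too.
// Assumes QuPath v0.2.0; 'makeInverseAnnotation' may be absent in older builds.
def ellipse = getSelectedObject()
ellipse.setPathClass(getPathClass('Other'))

makeInverseAnnotation(ellipse)        // the new inverse annotation is selected
def inverse = getSelectedObject()
inverse.setPathClass(getPathClass('OutsideIslet'))
fireHierarchyUpdate()
```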

Thanks so much!

My project is currently in an earlier QuPath version. How can I transfer the project to M10 whilst keeping the detections and annotations from the previous version?

Yes, there is a way! If your version is not too old, you should just be able to open the project as is.
If you have an older version (e.g. QuPath 0.1.2), there is a way to import images (with annotations/etc…). However, this will only be available in v0.2.0, which should be available in the next few days :slight_smile:

I’ll put it here anyway, as it might be out next time you read this! As mentioned here, if you use File > Project… > Import images from v0.1.2 to import images from an older version to v0.2.0, you can keep on using them as if nothing had changed!


Small addition, you should remember that something has changed though - because things like cell detection might give (slightly) different results in different versions :slight_smile:

But as long as you don’t mix your analysis, using one version for some images and another version for other images, it should be ok.

The main reason for the new importer from v0.1.2 is to salvage old annotations, which can be time-consuming to create. The second reason is that it can enable data generated in v0.1.2 to be queried using some of the new features. But in general, best to stick to the same version for everything if you can.


Thanks all for your prompt and friendly advice- looking forward to trying it out with the new version! :smiley:


Hi All!
Have been trying your suggestions, which have been great, but I’ve run into a different issue: the individual detection points (red points in the screenshot) seem to be pooled into a single overarching detection object for each tile analysed using the positive pixel count (highlighted in the screenshot sidebar and yellow on the slide). Therefore I only get distances from these overall objects to my desired annotation when running Distance to annotations. Is there something I need to (or can) change to get the distance from the individual detection points, as opposed to the overall detection? :smiley:

I’m not sure if there is an easy way to split detections, but you can certainly split annotations, so you might try using a script to convert the positive and negative detections to annotations, then use Objects -> Split annotations to divide them up, and finally convert them back into detections.
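If scripting is an option, the annotation round-trip can sometimes be skipped entirely: `RoiTools.splitROI` (present in recent v0.2.0 milestones, an assumption worth checking in your build) can split each detection’s ROI into its disconnected pieces directly. A sketch:

```groovy
// Sketch: split each pooled detection into one detection per disconnected
// region, keeping the original classification.
// Assumes QuPath v0.2.0 and that RoiTools.splitROI exists in your build.
import qupath.lib.objects.PathObjects
import qupath.lib.roi.RoiTools

def detections = getDetectionObjects()
def split = []
detections.each { d ->
    RoiTools.splitROI(d.getROI()).each { roi ->
        split << PathObjects.createDetectionObject(roi, d.getPathClass())
    }
}
removeObjects(detections, true)
addObjects(split)
fireHierarchyUpdate()
```

Working on the detections directly avoids removing or touching your existing annotations (such as the pial surface annotation used for the distance measurement).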

Scripts to manipulate objects can be found within that subsection here, among others.
https://gist.github.com/Svidro
Not 100% sure they are all compatible with your version, so you may need to hunt around the forums for variants.

Updated the script list with a new script that should work in M11, and gives a bit more description as far as how to adjust it to suit various circumstances.

There isn’t currently - which is one of the many limitations of the positive pixel count in QuPath.

However, in v0.2.0-m11 you can apply Create simple thresholder to color-deconvolved channels and choose ‘Create objects’, at which point you can optionally split the objects that are created.

I haven’t had any time to apply this method in earnest myself and iron out any of the kinks, but this is the general approach that will be used to support this kind of application in the future.


Still getting used to that existing! And that will probably work for the cartilage post as well.

1 Like