Vascular pattern recognition and its correlation with immune infiltrate

Dear QuPath community,
I’m Federica, a medical student from Italy. I am new to QuPath and image analysis, and I have no coding experience. I learned QuPath from the tutorial videos and by reading this forum. Thank you in advance for any help!

My research involves analysing two different types of vascular patterns and then superimposing a slide containing immune cells, to understand the relationship between vascular pattern and immune infiltrate.
On the slide containing the vascular patterns I’ve trained a pixel classifier to recognize the two different patterns, and I’ve exported the result as a binary mask.
On another slide containing the immune cells I’ve applied Positive Cell Detection, which identifies the immune cells I’m interested in.

I’m trying to figure out whether there is a way to obtain the coordinates of the annotations created with the pixel classifier, so that I can overlay them with the coordinates found by positive cell detection and finally quantify the proportion between a specific vascular pattern and the immune infiltrate.

Thank you in advance for any answers; I’m happy to provide any further information.
Best regards,
Federica


Hi Federica,
There are a few ways to do this, depending on your exact setup and your comfort level with programming.

The simplest way is to copy-paste the annotation directly from the vascular image onto the immune cell image. Open the vascular image, select one of the vascular annotations, then open the immune image and go to Objects > Annotations > Transfer last annotation. This will transfer the annotation into the new image, but it will almost certainly be misaligned, so you then align it by hand as best you can. This will ONLY work well if the two images come from one slide that was cyclically stained and scanned, or from very closely spaced sequential sections. It also requires a lot of manual work and therefore doesn’t scale well to many slides, but it’s the only solution I know of that doesn’t require any coding.
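
Once the annotation is sitting on the immune-cell image, a short Groovy script can give you the counts you need for the proportion. This is only a minimal sketch, assuming your detections are classified as "Positive" and your vascular annotations are the only annotations in the image (adjust the class names to match your project):

```groovy
// Count positive cell detections whose centroid falls inside each vascular annotation
def positiveClass = getPathClass("Positive")
def positiveCells = getDetectionObjects().findAll { it.getPathClass() == positiveClass }

for (annotation in getAnnotationObjects()) {
    def roi = annotation.getROI()
    def count = positiveCells.count { cell ->
        roi.contains(cell.getROI().getCentroidX(), cell.getROI().getCentroidY())
    }
    // Area is in pixels unless you convert it using the image's pixel calibration
    println "${annotation.getPathClass()}: ${count} positive cells in ${roi.getArea()} px^2"
}
```

Dividing the count by the annotation area then gives you a density per vascular pattern, which you can compare between the two patterns.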

If the slides differ too much for this to be acceptable, or if you have too many images to align them one by one, there are some scripts that can help. This post describes a way to first align the images using affine transformations and then create one combined image:

From there, you would run the pixel classifier and the cell detection directly on the combined image. I’ve never used this tool, but the author is on this forum all the time and is very helpful. It looks like you don’t need to code anything yourself; you should be able to just download it, install it, and run it through its GUI.

Alternatively, you could use this script to transfer the existing annotation that you already made:

This requires more coding knowledge, but can be automated to loop through an entire project with lots of images and do all of the alignments automatically.
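
For reference, the core of such a script is fairly short. The sketch below is only an illustration of the idea, not the linked script itself; the image name is a placeholder, and you would still need to apply the alignment transform to the copied annotations before or after transferring them:

```groovy
// Copy annotations from another image in the same project into the current image
import qupath.lib.objects.PathObjects

def project = getProject()
// "vascular_slide.ndpi" is a placeholder - use the name of your vascular image
def sourceEntry = project.getImageList().find { it.getImageName() == "vascular_slide.ndpi" }
def sourceHierarchy = sourceEntry.readHierarchy()

// Create fresh copies so the source image's annotations are left untouched
def copies = sourceHierarchy.getAnnotationObjects().collect {
    PathObjects.createAnnotationObject(it.getROI(), it.getPathClass())
}
addObjects(copies)
```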


Dear @smcardle, thank you very very much for the quick reply and the useful suggestions!!
I’ve already tried the copy-paste option and it works: my slides are very closely spaced sequential sections, and the only problem is the rotation, but the rotation tool in the “Annotations” tab takes care of that.
Just a quick question: is there any way to apply the “transformation matrix” obtained through Interactive Image Alignment to the transferred annotation, so as to be more precise?

I will also try the other options to see which one best fits my project.
Thank you again very much for your help, it is wonderful how you make your expertise available.

Check this

and

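In case it helps, here is a minimal sketch of the idea those posts describe: copy the 2×3 matrix shown by Analyze > Interactive image alignment into a script and apply it to the transferred annotations. The matrix values below are placeholders, and depending on which image you set as the moving one you may need the inverse transform:

```groovy
// Apply an affine transform (copied from Interactive image alignment) to the annotations
import java.awt.geom.AffineTransform
import qupath.lib.objects.PathObjects
import qupath.lib.roi.RoiTools

// Placeholder values - replace them with the 2x3 matrix shown by the alignment tool:
//   m00 m01 m02
//   m10 m11 m12
def matrix = [1.0, 0.0, 0.0,
              0.0, 1.0, 0.0]

// AffineTransform expects the values in the order (m00, m10, m01, m11, m02, m12)
def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5])
// If the transform goes in the wrong direction, invert it:
// transform = transform.createInverse()

def oldAnnotations = getAnnotationObjects()
def newAnnotations = oldAnnotations.collect {
    PathObjects.createAnnotationObject(RoiTools.transformROI(it.getROI(), transform), it.getPathClass())
}
removeObjects(oldAnnotations, true)
addObjects(newAnnotations)
```
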

Thanks a lot @phaub, it works very well and I reached my aim!

I have a question about the pixel classifier I initially trained to distinguish the two types of vascular patterns: do you know a way to “validate” this classifier?

Thanks again and have a good day!
Federica

Pete’s answer also applies here. Read this:
