Transfer cell boundaries/polygons to another image of a different size

Hi, I have DAPI (nucleus) and CD45 (immune cell) markers in two images of different sizes. How do I use (or transfer) the cell boundaries/polygons/objects from the DAPI image to detect CD45+ cells in the second? I think there are two issues here: (1) the objects need to be transformed to match the cells in the second image, and (2) the transformed polygons then need to be used to detect positive cells and extract other features.
New to QuPath. Any help would be appreciated.

Convenient timing, but have you looked at this post from… yesterday I think?

Note that regardless of which image the objects were generated in, features can be added using Add intensity features.

Thanks for your reply, but I am getting an error when I hit the Estimate Transform button: “Unable to estimate transform - result did not converge”. (I am using two .czi images: one brightfield, single channel, and one fluorescent, multi-channel. The BF image contains the nucleus object detections, which I want to transfer to the IF image.)

Another approach I took was to convert the cell detections into a binary mask and save it as an image. I CAN align this binary image to the IF multiplex image. I start by manually changing the values in the Transform Matrix to bring the “moving” image as close as possible to the “fixed” image, then hit the Estimate Transform button - and it works! But I’m not sure about the next step: how do I use this mask/overlay to extract intensity from the IF image (since these are not object polygons)? Here’s a snapshot of the alignment using the Alignment Tool.


That looks fairly similar to using the “Points” option to line up cells between two images, but should work well. Intensity matching would be… problematic between brightfield and IF images, so that not working is not a surprise. And technically, the brightfield is three channels, while the IF is… some number. It’s a bit confusing since you said you were matching DAPI in the first post, which should have been a similar image type to the second fluorescent image.

I would recommend testing various ways of creating objects, essentially. Once you have the objects, those can be transferred between images as per the links above (in your case, I assume you would want to transfer the objects into the IF image). Simple thresholding, pixel classifier, maybe even cell detection. I’m not sure which will work in your case, without access to the file. You’ll have to try some things out. Maybe DoG superpixel segmentation would work well for that kind of mask.
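If it helps later, once one of those approaches works interactively it can also be scripted. A rough sketch, assuming a 0.2.x-style workflow entry for cell detection - the class name and every parameter value below are placeholders, so copy the exact runPlugin line QuPath records in your own Workflow tab instead:

```groovy
// Sketch only: script the detection step once it works in the GUI.
// Parameters are placeholders - copy the real ones from the Workflow tab.
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
        '{"requestedPixelSizeMicrons": 0.5, "backgroundRadiusMicrons": 8.0, ' +
        '"sigmaMicrons": 1.5, "minAreaMicrons": 10.0, "maxAreaMicrons": 400.0, ' +
        '"threshold": 0.1, "cellExpansionMicrons": 5.0, "includeNuclei": true, ' +
        '"smoothBoundaries": true, "makeMeasurements": true}')
```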

Once you have the transforms saved, though, you can test out as many different variants as you want, since the objects can quickly be shuffled back and forth. It may take a few tries to get used to setting the createInverse variable correctly.
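To make the createInverse part concrete, here is a minimal sketch (not the full transfer script) of what the flag does - the matrix values are made up, and in the real scripts they come from the affine file you saved during alignment:

```groovy
import java.awt.geom.AffineTransform

// Placeholder values - the real scripts read these from the saved affine file
double[] matrix = [1.0, 0.0, 150.0,
                   0.0, 1.0, -75.0]

// Whether you need the forward or inverse transform depends on which image the
// objects start in and how the alignment was set up - hence the flag.
boolean createInverse = true

def transform = new AffineTransform(
        matrix[0], matrix[3], matrix[1],
        matrix[4], matrix[2], matrix[5])
if (createInverse)
    transform = transform.createInverse()

println transform
```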

Sorry for the confusion. My original plan was to detect cells from the DAPI, but I soon realized it was bleached and had a lot of background. Instead, I am now using a Hematoxylin image (of a different size than the IF) to detect the nuclei/cells. Here’s what I’m doing step by step:

  1. Created objects (using Cell Detection) in the Hematoxylin image
  2. Estimated the Affine Transform matrix using the interactive tool - manually changed the values as much as possible, since intensity-based AUTOMATIC alignment between BF and IF doesn’t seem to work.
  3. Ran the “Multiple Image Alignment and Object Transfer” script, but got an error:

ERROR: MultipleCompilationErrorsException at line 23: startup failed:
Script9.groovy: 24: unable to resolve class qupath.lib.gui.helpers.DisplayHelpers
@ line 24, column 1.
import qupath.lib.gui.helpers.DisplayHelpers

Based on these, I have a few questions:

  1. Does this script support object transfer for the same image-type only (i.e. IF to IF, BF to BF)?
  2. Does Mike’s script transfer objects (i.e. cell detection) or just annotations?
  3. What’s the best way to send you my images?

Approach 2: Creating annotations from masks as opposed to transfer
As I mentioned, I also tried to generate a binary mask from the cell detections. When I tried to import the mask and create annotations using the script here (Script(s) of the Day: Exporting & importing binary masks), I get an error:

ERROR: MultipleCompilationErrorsException at line 23: startup failed:
Script18.groovy: 24: unable to resolve class qupath.imagej.objects.ROIConverterIJ
@ line 24, column 1.
import qupath.imagej.objects.ROIConverterIJ
^

Script18.groovy: 27: unable to resolve class qupath.lib.scripting.QPEx
@ line 27, column 1.
import qupath.lib.scripting.QPEx

What version of QuPath are you using?
Also, @Mike_Nelson

Edit: The scripts posted in that thread do not have a DisplayHelpers import statement, so you are not using those scripts. That must be one of the older scripts, as described here:

You are using older scripts in a newer version of QuPath, see:

Transfer of objects includes all contained objects such as cells, tiles, other detections, etc., provided they are child objects in the hierarchy.
IF to brightfield should not matter - only the objects themselves and the affine file. The scripts require all steps to be followed as far as generating the affine file, however; skipping straight to running the transfer script without setting things up correctly will not work.

Removing “import qupath.lib.gui.helpers.DisplayHelpers” worked! Thanks!
Just curious - how does the Point Annotation option work to align the images? Intensity-based alignment either does not converge or is inaccurate for my images.

Makes sense. Thanks for the response.
(The script worked after I removed a line from the old version.)

I have commented out the lines importing ROIConverterIJ and QPEx.
However, new errors are generated:

INFO: Unable to parse annotation from TestImage_cellObj_2,0,0,44181,38661-mask.png: No such property: ROIConverterIJ for class: Script18
ERROR: MissingMethodException at line 63: No signature of method: qupath.lib.objects.hierarchy.PathObjectHierarchy.addPathObjects() is applicable for argument types: (ArrayList, Boolean) values: [ , false]
Possible solutions: addPathObjects(java.util.Collection), addPathObject(qupath.lib.objects.PathObject)

What are the x and y values, and where can I get these for renaming the mask image as per the instruction:
“[Short original image name][Classification name]([downsample],[ x ],[ y ],[width],[height])-mask.png”
Can you give me an example of how the mask should be renamed?

Sorry, haven’t really done much with the mask script. Never used it in fact.

ROIConverterIJ is basically the same kind of error, though. You need to change it, not comment it out - the class itself was renamed and I think its location changed.
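If it helps, the kind of change needed is roughly this - assuming QuPath 0.2.x, and please check the exact class names against the docs linked below:

```groovy
// Old (pre-0.2.0) imports that no longer resolve:
// import qupath.imagej.objects.ROIConverterIJ
// import qupath.lib.scripting.QPEx

// Rough 0.2.x equivalents (verify for your version):
import qupath.imagej.tools.IJTools        // ImageJ <-> QuPath ROI conversion methods
import qupath.lib.gui.scripting.QPEx      // QPEx moved under qupath.lib.gui.scripting

// The hierarchy API changed too - as the error message itself suggests, the old
// two-argument call becomes a single-argument one:
def hierarchy = QPEx.getCurrentHierarchy()
// hierarchy.addPathObjects(newObjects, false)   // old, no longer compiles
// hierarchy.addPathObjects(newObjects)          // new
```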

Now that I think about it, you might be better off looking through the documentation that pertains to 0.2.0m8+
I’m not sure the exact code you want is there, but there are sections like:
https://qupath.readthedocs.io/en/latest/docs/advanced/imagej.html#sending-objects-to-qupath
https://qupath.readthedocs.io/en/latest/docs/scripting/overview.html#creating-objects
https://qupath.readthedocs.io/en/latest/docs/scripting/index.html
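The “creating objects” page is probably the most directly relevant. A tiny example of that approach in the 0.2.x API (coordinates here are arbitrary):

```groovy
import qupath.lib.objects.PathObjects
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs

// Build a ROI, wrap it in an annotation object, and add it to the hierarchy
def plane = ImagePlane.getDefaultPlane()
def roi = ROIs.createRectangleROI(1000, 1000, 500, 500, plane)
def annotation = PathObjects.createAnnotationObject(roi)
addObject(annotation)
```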

@Research_Associate @Mike_Nelson
Thanks for the direction - I managed to transfer the cell objects to the IF image.
However, QuPath retains the measurements from the BF image. For example, the nucleus and cytoplasm measurements show “Hematoxylin” and “DAB” rather than the IF channels of the current image (see screenshot).


That is expected - those measurements are created when the cells are created. This allows you to transfer objects across multiple images and collect information from each; just generate new features in the current image. The link in my first post about adding features goes into some detail.
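For example, something along these lines - a sketch only, since the parameter JSON is best copied from the Workflow tab after running Add intensity features once on the IF image (the values below are placeholders):

```groovy
// Re-measure the transferred cells in the current (IF) image.
// Run "Add intensity features" once from the GUI, then reuse the exact
// runPlugin line recorded in the Workflow tab; these parameters are placeholders.
selectDetections()
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin',
        '{"pixelSizeMicrons": 0.5, "region": "ROI", "tileSizeMicrons": 25.0, ' +
        '"doMean": true, "doStdDev": true, "doMinMax": true, "doMedian": false, ' +
        '"doHaralick": false}')
```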

Another example of generating features by scripting here: