Help: RelateObjects + secondary object measurement for dual-positive surface markers (new user)

Hi everyone,

I’ve just started to learn CellProfiler and am quite enthusiastic about what it can do so far. Starting from the basics, I am simply trying to count cells that are dual positive for two different cell surface markers (e.g. green and red) for now.

Initially, I set up a pipeline of IdentifyPrimaryObjects --> IdentifySecondaryObjects. However, this gave me a 100% count for my secondary objects. Reading through previous posts here, I realized this was the wrong setup, as each primary object corresponds to exactly one secondary object, giving identical counts.

I then progressed to IdentifyPrimaryObjects (green) --> IdentifyPrimaryObjects (red) --> RelateObjects (green as parent, red as child) --> FilterObjects on the parent based on child counts (minimum 1, no maximum value). In between these steps, I also adjusted the image intensity of the green and red channels to reduce background signal.

This is where I ran into problems. By manual counting in other software, I believe at least 90% of cells are dual positive for green and red. However, CP gives me only about 40-60% positive. Unlike previous posts where researchers are looking at nuclei, where it’s fairly clear that the nucleus sits inside a cell, I am trying to count dual-positive cell surface markers. I’m wondering if CP somehow has trouble counting this, or whether I used the wrong pipeline and there might be a better approach?

Should I post some example images here (edit: uploaded)?
Thanks for the help and any suggestions!

Eg1 (Green Red).tif (870.7 KB)
Eg1 (GreenOnly).tif (312.9 KB)
Eg1 (RedOnly).tif (753.5 KB)

Images would certainly help.
In the absence of those, I would set up the pipeline like this, assuming you have subtracted the background intensity and have consistent intensity levels between images:

  1. Find cells in one channel.
  2. Use morphological operations to close holes or dilate cells slightly if necessary.
  3. Run MeasureObjectIntensity on the found objects, measuring the intensity in both channels.
  4. FilterObjects based on the intensity measurements: if the integrated (or median, depending on your question) intensity exceeds a certain threshold in both channels -> dual expression (see the sketch below).
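
To make the logic concrete, here is a minimal sketch of the same idea outside CellProfiler using scikit-image (inside CP the corresponding modules do this for you). The file names, hole/dilation sizes and intensity cutoffs are placeholders you would have to tune to your own data:

```python
from skimage import io, filters, morphology, measure

# placeholder file names -- substitute your own exported channels
green = io.imread("Eg1_green.tif").astype(float)
red = io.imread("Eg1_red.tif").astype(float)

# 1. find cells in one channel (simple Otsu threshold on the green channel)
mask = green > filters.threshold_otsu(green)

# 2. close small holes and dilate slightly
mask = morphology.remove_small_holes(mask, area_threshold=64)
mask = morphology.binary_dilation(mask, morphology.disk(2))
labels = measure.label(mask)

# 3. measure per-object mean intensity in both channels
props_g = measure.regionprops(labels, intensity_image=green)
props_r = measure.regionprops(labels, intensity_image=red)

# 4. filter: an object is dual positive if it exceeds a cutoff in both channels
GREEN_CUTOFF, RED_CUTOFF = 50.0, 50.0  # assumed values; set these from your controls
dual_positive = [g.label for g, r in zip(props_g, props_r)
                 if g.mean_intensity > GREEN_CUTOFF and r.mean_intensity > RED_CUTOFF]
print(len(dual_positive), "of", labels.max(), "objects are dual positive")
```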

Possible variations:

Find cells in both channels and combine segmentation results to make sure you find all cells.
Then measure the intensity in both channels for the combined object set.


Hi Volker,

Thanks for the quick response and the help! I’ve now uploaded example images of what I’m trying to count. A quick summary: there are more red objects than green, as the red marker is expressed on more than just the green objects. However, almost all green cells should also be red (>99%). Of course, to complicate things, some of the objects are very close to each other. Therefore, the questions I am asking for now are:

(1) How many total green objects there are
(2) How many (+percentage) green objects have also red objects
(3) How many total red objects there are
(4a) I note there seem to be clusters of objects; can I get CP to tell me how many clusters there are, and (4b) how many objects are in each cluster?

Based on the example images, would you still recommend the approach you’ve suggested? Some of your suggestions might be a little too difficult for me :sweat_smile:

I had a look at the images. These don’t seem to be raw images from the microscope but some sort of screenshot, which makes it harder to assess.
The basic approach of MeasureObjectIntensity in both channels and then setting thresholds to determine whether the object expresses in the green channel and/or in the red channel still holds.

The problem will be the segmentation of the objects, as you have a wide variation of intensities and also significant overlap. If you can add a bright nuclear marker (such as DAPI) to make sure you find all cells, that would simplify matters. You will probably also have to correct for uneven illumination (see the sketch below).
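
If it helps when you get to that point: CellProfiler has CorrectIlluminationCalculate / CorrectIlluminationApply modules for the illumination step. The rough idea behind them is something like the following sketch (placeholder file name and an assumed smoothing sigma, not a recipe for your data):

```python
import numpy as np
from skimage import io, filters

img = io.imread("Eg1_green.tif").astype(float)        # placeholder file name
# estimate the slowly varying background with a very wide Gaussian blur
background = filters.gaussian(img, sigma=100, preserve_range=True)
corrected = img / np.maximum(background, 1e-6)        # divide out the uneven illumination
corrected = corrected / corrected.max()               # rescale back to the 0..1 range
```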

Hi Volker,

You are right, these are snapshots. I was actually acquiring time-lapse images with multiple FOVs, and the raw data from the microscope come in a single file, which makes it hard to open in the usual ways. My approach so far is to import the file into Imaris per FOV, save each FOV as something FIJI can open, and then use FIJI to save an image sequence of files that CP recognizes.

I attach here the raw green and far red image sequence files exported from FIJI if you wish to take a look.

I think your suggestion of MeasureObjectIntensity, setting thresholds, then RelateObjects and getting CP to give me counts of objects expressing both green and far red would be the best way forward for now.

Finally, I am doing live intravital imaging in mice, so injecting and labelling nuclei is typically not an option :sweat_smile:.

Also, I’m very likely stupid, but I’ve only had a one-day CP workshop, and trying to learn this by myself at the moment is enriching but does make the going quite slow. Unfortunately, I don’t know how to do object segmentation or correct for uneven illumination. I might have to contact Beth at the Broad Institute in the near future.

Thanks for your help and patience so far though!

Eg1.1 (Green).tif (7.3 MB)
Eg1.2 (Red).tif (7.3 MB)

Hi Volker,

Sorry, just feeling a little stupid. I set up a pipeline like the one you recommended:
(1) IdentifyPrimaryObjects 1 (after RescaleIntensity)
(2) IdentifyPrimaryObjects 2 (after RescaleIntensity)
(3) MeasureObjectIntensity 1
(4) MeasureObjectIntensity 2
(5) RelateObjects
(6) FilterObjects
–> This is where I didn’t know exactly how to proceed:
(a) select objects to filter (the RelateObjects output?)
(b) select filtering mode? (measurements, rules, image, classifiers)
(c) select filtering method? (limits, minimal, maximal, minimal per object, maximal per object)
(d) measurement to filter:
(i) category (I chose Intensity here)
(ii) measurement (integrated intensity, min intensity, max intensity, edges??)
(e) filter using a minimum value, yes or no, and filter using a maximum value, yes or no

Also, I get the feeling I should add two intensity measurements to filter by, where one should be from the green channel and the second from the far-red channel.

Thanks!

Your feeling is correct: filter the objects based on the intensity measurements in both channels. You can get both measurements simply by using "Add another image" in MeasureObjectIntensity.
No RelateObjects needed, but you will have to combine both object sets into a single set. Unfortunately, I don’t think such a method is implemented directly. You may have to take the extra steps of converting the objects to masks, combining the masks, and identifying all objects from the combined mask (a sketch of the idea follows below).
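
In case it makes the mask idea clearer, here is a toy sketch of that combination step outside CP; the two label images are made-up stand-ins for your green- and red-derived object sets. Inside CP the rough equivalent would be something like ConvertObjectsToImage on each set, ImageMath with an Or operation, then IdentifyPrimaryObjects on the combined binary image.

```python
import numpy as np
from skimage import measure

# made-up label images standing in for the green- and red-derived objects
green_labels = np.zeros((10, 10), dtype=int)
green_labels[1:4, 1:4] = 1
red_labels = np.zeros((10, 10), dtype=int)
red_labels[2:6, 2:6] = 1
red_labels[7:9, 7:9] = 2

# objects -> binary masks -> logical OR -> one combined mask
combined_mask = (green_labels > 0) | (red_labels > 0)

# re-identify objects from the combined mask; overlapping green/red cells merge
combined_labels = measure.label(combined_mask)
print(combined_labels.max(), "objects in the combined set")
```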