Defining ROI in tissues and measuring intensity

Hello,

I’ve searched the Q&A for similar questions but have not come across an example like this. I’m looking to create a pipeline for identifying basal and suprabasal compartments based on a DAPI image of epithelial tissue, followed by measuring intensity within each compartment for two fluorophores (one red, one green). Basically, for each sample field of view I have 3 images: DAPI for nuclei, green for cytoplasmic protein #1, red for cytoplasmic protein #2.

Is there a recommended way in CellProfiler to automatically (or manually) identify the basal layer of DAPI-stained cells and segment the tissue into two compartments based on this?

Any advice or links to related pipelines are greatly appreciated, thank you!


Hi,

To do this task manually is fairly straightforward: you could use the IdentifyObjectsManually module to hand-draw each compartment, one for the basal layer and the other for the suprabasal layer. These two objects would then be available downstream for measuring intensity from whatever channels you wish.
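For what it’s worth, the downstream measurement (e.g. the mean intensity from MeasureObjectIntensity) essentially amounts to averaging each channel’s pixels under each compartment mask. Here is a minimal scikit-image sketch of that idea, purely as a prototype outside CellProfiler; the file names are placeholders, and it assumes the two compartment masks have been saved as binary images:

```python
# Minimal sketch (not CellProfiler itself): mean channel intensity per compartment.
# File names are placeholders; the masks are assumed to be binary images saved earlier.
from skimage import io

green = io.imread("green.tif").astype(float)   # cytoplasmic protein #1
red = io.imread("red.tif").astype(float)       # cytoplasmic protein #2

masks = {
    "basal": io.imread("basal_mask.tif").astype(bool),
    "suprabasal": io.imread("suprabasal_mask.tif").astype(bool),
}

for name, mask in masks.items():
    print(f"{name}: green mean = {green[mask].mean():.3f}, "
          f"red mean = {red[mask].mean():.3f}")
```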

To do this task automatically is harder, if it can be done at all. One approach would be to blur the image by a large amount using the Smooth module, and then identify the suprabasal compartment (the compartment with the clustered nuclei?) using IdentifyPrimaryObjects with no declumping, no discarding of objects touching the image border, a very large upper diameter limit, and perhaps the thresholding method set to the default Otsu global, 2-class. I tried this out with a smoothing filter size of 100 in Smooth, and the identification wasn’t too bad.
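If you want to prototype that step outside CellProfiler, a rough scikit-image sketch of the same idea (heavy blur, then a global 2-class Otsu threshold) might look like the following; the file name and sigma value are placeholders you would need to tune for your own images:

```python
# Rough sketch of the "smooth heavily, then threshold" idea outside CellProfiler.
# File name and sigma are placeholders -- tune them to your images.
from skimage import io, filters, measure

dapi = io.imread("dapi.tif").astype(float)

# Heavy Gaussian blur, roughly analogous to a large filter size in Smooth
blurred = filters.gaussian(dapi, sigma=50)

# Global 2-class Otsu threshold, as in the IdentifyPrimaryObjects default
mask = blurred > filters.threshold_otsu(blurred)

# No declumping, no discarding of border objects: just label the foreground
suprabasal_labels = measure.label(mask)
print("Candidate suprabasal regions:", suprabasal_labels.max())
```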

The problem I see with this is two-fold:

  • The bulk of what is detected will be biased towards regions of clustered nuclei. If there are nuclei some distance away from the main mass but still part of the suprabasal compartment, chances are they will be missed.
  • The bigger problem (I think) is that there is no straightforward way to identify the basal compartment. In your example, I can tell by eye that the basal compartment is below the mass of nuclei (I think), but there’s no real way to distinguish the space above it as anything different; from the computer’s standpoint, they’re both largely empty space. Unless you always capture your images with the basal compartment facing downwards, there’s nothing to really cue from. On the other hand, if you can guarantee the compartment orientation, then you could do the following (a rough sketch of these steps appears after the list):
    1. ConvertObjectsToImage with the suprabasal region as input, converted to a binary image.
    2. ImageMath to invert the pixel intensities of that object image.
    3. IdentifyPrimaryObjects on the inverted image to identify the non-suprabasal spaces. Set the thresholding method to manual with a value of 0.5, no declumping, an arbitrarily large upper diameter limit as before, and no discarding of objects touching the image edge.
    4. FilterObjects on the non-suprabasal objects to exclude all but the bottom-most object. Set the filtering method to Maximal, with Location as the measurement category and Center_Y as the measurement (image Y coordinates increase downwards, so the bottom-most object has the largest Center_Y).
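Again purely as a prototype outside CellProfiler, a rough scikit-image sketch of steps 1–4 might look like this, assuming `mask` is the binary suprabasal region from the earlier sketch:

```python
# Rough sketch of steps 1-4 outside CellProfiler.
# Assumes `mask` is the binary suprabasal mask from the previous sketch.
from skimage import measure

# Invert the binary suprabasal image (equivalent in spirit to ImageMath's invert)
non_suprabasal = ~mask

# Label the non-suprabasal spaces: no declumping, keep objects touching the border
labels = measure.label(non_suprabasal)

# Bottom-most object = maximal centroid row (Center_Y), since rows increase downwards
props = measure.regionprops(labels)
bottom = max(props, key=lambda p: p.centroid[0])
basal_mask = labels == bottom.label
```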

These steps will yield a basal object that you can measure in the same way as the suprabasal object. Hope this helps!
-Mark