Measuring the nucleus-to-cytoplasm ratio of a cell in an image with significant background noise

333_s_20x.TIF (3.5 MB)

Hello,

I am new to the forum, but was wondering if someone could offer me some help. I have pictures of ovarian cells and am trying to measure the nucleus-to-cytoplasm ratio of each cell (see picture attached). I am trying to find a way to automate it, since I have thousands of cells to measure across multiple pictures. As an initial step, I am trying to differentiate the cells from the background and the noise within the image (which is mostly just broken cells that I can’t measure). I have tried to do this with the Trainable Weka Segmentation tool in Fiji as well as pixel classification in ilastik. However, both of these methods have proven difficult, and the classifiers are having a hard time identifying my cells in their entirety and separating them from the objects I don’t want to measure (broken cells). I have tried filtering by eccentricity, shape, etc., but nothing has seemed to work.

Once I figure out how to distinguish my cells from background and noise, my goal is to have the program measure the nucleus-to-cytoplasm ratio of each cell in an image. I know that with Analyze Particles in Fiji you can measure the area of an object, but is there a way to measure the nucleus-to-cytoplasm ratio? I have also played with CellProfiler a bit, since it seems to be able to distinguish between nuclei, cytoplasm, and whole cells; however, I’m not very confident it’s measuring the right boundaries, as I got some pretty off results when I did a test run.

Any help or advice I can get on this process would be appreciated, thanks!

If you’d like help with CellProfiler, could you post your pipeline and an example image or two, and explain why you thought the results were “off” and in what way? With that info, we may be able to help!

Pipeline_W2019.cpproj (786.1 KB)

333_s_20x.TIF (3.5 MB)

Off as in it wasn’t identifying the nucleus as a whole, and it was identifying other objects as the nucleus. Thanks!

Hi @KirstenSteinke,

I took a look at your pipeline and think that it will be difficult for CellProfiler to segment these nuclei and cells using classical image segmentation techniques. If you look at the pixel intensity values in the grayscale image that you create in the first module, you can see that the background intensity values (~0.5) are essentially the same as the intensity values within your cells (~0.4 - 0.6). You can see these intensities by hovering your mouse cursor over the image and looking at the bottom of the viewer window (shown at 10:35 in the CellProfiler Workshop video tutorial on the COBA YouTube channel). Since classical segmentation requires selecting an intensity threshold that distinguishes your objects from the background, that won’t be possible with your images (see the same video for more explanation, if that’s not clear!).
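If you’d like to verify that overlap yourself outside of CellProfiler, a quick script can show it. Here’s a minimal sketch in Python with scikit-image (this is not part of CellProfiler; the file name is just your attachment from above):

```python
# Minimal sketch: check whether cell and background intensities overlap.
# Assumes scikit-image and matplotlib are installed and that the file
# name matches the attachment above.
import matplotlib.pyplot as plt
from skimage import exposure, io
from skimage.util import img_as_float

img = img_as_float(io.imread("333_s_20x.TIF", as_gray=True))
print("intensity range: %.3f - %.3f" % (img.min(), img.max()))

# A single broad peak, rather than two separable modes, is the
# signature of the thresholding problem described above.
hist, bin_centers = exposure.histogram(img)
plt.plot(bin_centers, hist)
plt.xlabel("pixel intensity")
plt.ylabel("pixel count")
plt.show()
```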

Of course, it is possible to distinguish cells from background by visual inspection; our eyes are mostly using texture differences to identify these cells. For that reason, we recommend using ilastik for pixel classification, followed by CellProfiler to measure object properties for your nuclear:cytoplasmic ratio. As a first step, the Pixel-based classification using ilastik video tutorial on the COBA YouTube channel should be helpful.
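As a rough illustration of the texture point (this is separate from the ilastik workflow we recommend): a local-entropy filter, one simple texture measure, is often high inside grainy cells and low in smooth background even when mean intensities match. A minimal scikit-image sketch, with a neighborhood radius you would need to tune:

```python
# Minimal sketch: a local-entropy texture map can separate textured
# cells from smooth background even when mean intensities overlap.
from skimage import io
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

img = img_as_ubyte(io.imread("333_s_20x.TIF", as_gray=True))

# The neighborhood radius (10 px here) is a guess; tune it to roughly
# the scale of the texture inside your cells.
texture = entropy(img, disk(10))

# Rescale to 8-bit and save for inspection.
io.imsave("texture_map.tif", img_as_ubyte(texture / texture.max()))
```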

@pearl-ryder thanks for taking the time to check this out! Yeah, I was worried that might be the case. Unfortunately, when I took the images I didn’t have any stains available for the cells, so it’s hard to differentiate between cell intensity and background intensity. I have actually tried using pixel classification in ilastik, but I seem to be running into the same issue: it isn’t able to differentiate between the cell and background intensities. Do you know of a way to remedy this? I have also tried training something similar in Fiji (Trainable Weka Segmentation), but am getting the same result. I used the same picture uploaded previously and just followed the protocol laid out in the Pixel-based classification ilastik video tutorial.

@KirstenSteinke, we suspected that ilastik could perform reasonably well on your images because the granularity is so different between the background, the cells, and the debris. For that reason, I took a look at your image in ilastik and was able to get a reasonable differentiation of cells from debris and background. This screenshot shows the probability for each pixel; blue corresponds to background, yellow to cells, and pink to debris:

Here’s a rough outline of my approach so you can try to replicate it:

  • I selected all features in step 2
  • I assigned pixels to each class using a brush of size 1 pixel. This approach helps to avoid accidentally assigning pixels to the wrong class
  • I started by assigning about 10 pixels to each class. I then turned on “Live Update” and enabled the “Uncertainty” mask to identify which pixels had the highest uncertainty, so that I could prioritize assigning those pixels to the appropriate class. I also used the “Segmentation” layer to find pixels that were currently being segmented into the wrong class
  • I continued this process until labeling more pixels didn’t significantly change the results
  • If I had access to multiple images, I would label pixels from many different images in order to increase the robustness of my training data

This approach is outlined in our “PixelBasedClassification” tutorial and the accompanying video tutorial on the Center for Open Bioimage Analysis (COBA) YouTube channel.

  • For segmentation, note that we recommend using CellProfiler to segment your images after you create the probability images in ilastik, rather than using ilastik itself for segmentation. That protocol is outlined in the tutorials referenced above; a sketch of the downstream measurement follows below.
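For what it’s worth, once you have good probability images, the measurement itself is simple. Here’s a minimal sketch in Python with scikit-image rather than CellProfiler (the probability file names are hypothetical, and it assumes each nucleus sits inside exactly one cell):

```python
# Minimal sketch of the post-ilastik measurement step, assuming you have
# exported per-class probability maps from ilastik ("cell_prob.tif" and
# "nucleus_prob.tif" are hypothetical file names).
import numpy as np
from skimage import io, measure

cell_prob = io.imread("cell_prob.tif")
nuc_prob = io.imread("nucleus_prob.tif")

# 0.5 is an arbitrary cutoff; CellProfiler's IdentifyPrimaryObjects
# offers more robust thresholding and declumping of touching cells.
cells = measure.label(cell_prob > 0.5)
nuclei = nuc_prob > 0.5

# For each labeled cell, the N:C ratio is the nuclear area divided by
# the remaining (cytoplasmic) area of that cell.
for region in measure.regionprops(cells):
    cell_mask = cells == region.label
    nuc_area = np.count_nonzero(nuclei & cell_mask)
    cyto_area = region.area - nuc_area
    if cyto_area > 0:
        print("cell %d: N/C ratio = %.2f" % (region.label, nuc_area / cyto_area))
```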

Good luck!

I had some “spare” time while waiting on a call and gave ZEN Image Analysis with Trainable Segmentation a try on your image, and the results look quite promising to me …

Hi @pearl-ryder,

I really liked your cool COBA tutorial video on using CellProfiler with ilastik output!

Cheers!

@k-dominik, all credit for that video goes to @Nasim. And I agree, it’s a great video!

@pearl-ryder thank you! I will look further into this technique. Did you convert the image to 8-bit grayscale before running ilastik pixel classification?

@sebi06 This looks great, and I agree it’s very promising! I am unfamiliar with ZEN; could you tell me a little bit more about it and what settings you used to get such great segmentation? Is there a way to look at the area of the nuclei and cytoplasm within ZEN, or is it something that I should do after segmenting, in Fiji or CellProfiler? Thanks so much!

Yep, @KirstenSteinke, I processed the image as an 8-bit grayscale image. I also cropped off the scale bar. Ideally you’ll have an unprocessed image from your microscope that doesn’t have a scale bar burned in.
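If you end up doing that conversion for many images, it’s easy to script. A minimal sketch with scikit-image (the crop margin is a placeholder for wherever your scale bar sits):

```python
# Minimal sketch: convert to 8-bit grayscale and crop off a burned-in
# scale bar before training.
from skimage import io
from skimage.util import img_as_ubyte

img = img_as_ubyte(io.imread("333_s_20x.TIF", as_gray=True))

# Hypothetical: assume the scale bar occupies the bottom 60 rows;
# adjust to your images.
cropped = img[:-60, :]
io.imsave("333_s_20x_8bit_cropped.tif", cropped)
```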

@KirstenSteinke Thanks for the compliment. In principle, I did something similar to what is possible in ilastik (and a bit more):

  • trained an ML pixel classifier in ZEN Intellesis Trainable Segmentation, including a Conditional Random Field (CRF) as post-processing
  • saved the trained model
  • created an image analysis pipeline from this model, where the segmentation step is done by the trained model + CRF
  • included some binary post-processing steps and object filters

This produces the analysis results shown above, where you can now measure many parameters for the cells.

For fancier image analysis, ZEN allows you to plug this model into any image analysis pipeline, e.g. for Zone-of-Influence, Translocation, Cell Counting, …

And yes, in ZEN you can definitely look at the area of nuclei and cytoplasm and also visualize the results. But of course, this can also be done in Fiji, CellProfiler, QuPath, or … The choice is yours here.

ZEN is a commercial package, but it is also available as a trial version. The current version is ZEN blue 3.2, but we are about to release 3.3, which contains some nice improvements.

@sebi06 it seems that ZEN only runs on Windows computers, and I have a Mac. I have read that the best alternatives on a Mac are ImageJ and ilastik for segmentation. I have tried both of these, but cannot get them to distinguish cell borders as well as you have done here. Do you know if there is a CRF function for ImageJ/ilastik when training the segmentation? What kind of binary post-processing and object filters did you use?

Hi @KirstenSteinke,

yes, Windows only. But maybe you can use “Boot Camp” or whatever it is called on macOS. I am not sure whether ilastik or ImageJ (Weka) offer CRF post-processing for the probability maps retrieved from ML segmentation.

And for more details on the processing steps, check the screenshot:

  • Minimum Area = 1000 --> cells must cover >= 1000 pixels
  • Min. Hole Area = 500 --> only fill holes up to 500 pixels
  • Dilate = 4 --> dilate 4 times
  • Separate = Watershed on the binary image, kernel size of the filter = 6 (the “count” label is misleading)
  • Min. Confidence = 0 --> does not influence the result when using CRF, since the CRF recalculates the binary segmentation output and therefore renders the confidence levels (probabilities) invalid
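If you want to approximate this chain outside of ZEN, here is a rough sketch with scikit-image (these are scikit-image functions, not a ZEN API; the parameter values mirror the list above, and the h = 6 used for peak detection is only a loose stand-in for the watershed kernel size):

```python
# Rough scikit-image approximation of the binary post-processing above;
# the parameters mirror the bullet list and are not a ZEN API.
from scipy import ndimage as ndi
from skimage import io, measure, morphology, segmentation

# Hypothetical binary mask exported from the ML segmentation step.
mask = io.imread("cell_mask.tif") > 0

# Min. Hole Area = 500: fill holes of up to 500 pixels.
mask = morphology.remove_small_holes(mask, area_threshold=500)

# Dilate = 4: four rounds of binary dilation.
for _ in range(4):
    mask = morphology.binary_dilation(mask)

# Separate = Watershed on the binary image, seeded from peaks of the
# distance transform; h = 6 loosely stands in for the kernel size of 6.
distance = ndi.distance_transform_edt(mask)
markers = measure.label(morphology.h_maxima(distance, 6))
labels = segmentation.watershed(-distance, markers, mask=mask)

# Minimum Area = 1000: keep only objects covering >= 1000 pixels.
labels = morphology.remove_small_objects(labels, min_size=1000)
```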

For more details, have a look here: ZEN Machine Learning