Segmenting wounds / scratches with CellProfiler


I found the example wound healing CP pipeline very helpful in segmenting images from these assays. The pipeline does well in cases where there's a visible scratch/wound, but I find it needs some tweaking for cases where the wound is nearly filled in at the later time points of these assays. Attached is the original image (CPImage.jpg) and the analysis result I get from CP (CPResult.png). The attached analysis was generated using the built-in pipeline, except that I changed the 'Typical artifact diameter' to 10 in the Gaussian filter "Smooth" step.

It looks like the darker regions of the image get segmented as objects. What is the strategy for avoiding this? Is there a pre-processing step I could add that equalizes the contrast/brightness, perhaps? Or is the fix more likely to come from tweaking the object-finding parts of the pipeline? Thanks.

Based on your image, I think that’s exactly what’s needed. I suggest adding the following modules:

  • CorrectIlluminationCalculate: You can probably use the default Regular method, and then select "Fit polynomial" for the "Smoothing method". If that doesn't quite cut it, select "Median Filter" instead, set "Manual" for the filter size, and set the size to something suitably big (maybe 100-200 for your images?). The idea in either case is that the output function should resemble the illumination pattern you see, but without reflecting the features of the actual cells themselves. If you can still see cell features, set the filter size even bigger.
  • CorrectIlluminationApply: Apply the illumination correction function derived from the prior module by dividing the input image by it to get the result.
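If it helps to see the two modules' logic outside CP, here is a minimal sketch of the median-filter variant, assuming numpy/scipy are available. The function name and the filter size are placeholders, not anything from the pipeline; tune the size until the illumination image no longer shows individual cells.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_illumination(image, filter_size=150):
    """Estimate a smooth illumination function with a large median
    filter (roughly CorrectIlluminationCalculate with Median Filter),
    then divide it out (roughly CorrectIlluminationApply with Divide).
    filter_size should be much larger than a cell so that only the
    slowly varying background survives the filtering."""
    img = image.astype(float)
    illum = median_filter(img, size=filter_size)
    # guard against division by (near-)zero background estimates
    illum = np.maximum(illum, illum.mean() * 1e-3)
    corrected = img / illum
    return corrected, illum
```

On a synthetic image that is pure illumination gradient, the corrected output should come out flat, which is a quick way to sanity-check the filter size.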

You could add these before or after the smoothing step, and see which one works better. But I’m guessing probably before.


Hi Mark,

Thanks very much for your reply and suggestion, greatly appreciate it! I tried your suggestion, but I'm consistently running into a case where the segmentation looks visually correct most of the time but then gives a "% covered" metric which is incorrect for the image. For example, the attached pipeline on the early time point of the wound reports that the area is 89% covered with objects, while it should be more like 20% (or less). On a very dense image (also attached), it reports that the area is 100% covered, which suggests the result is over-smoothed.

Any idea what might be going wrong with this variant of the pipeline? I attach the pipeline code as well.

Update: One of the things that is unclear to me is why there’s a discrepancy between the tissue outline panels and the “% area covered by objects” metric. The discrepancy is most visible in the t0 segmented image, where the green tissue outlines actually look correct, but it doesn’t look like the area inside the outlines is used in the computation of “% area covered by objects”.

Thanks again.

wound_pipeline.cp (7.48 KB)

This is because the module is detecting the wound space as part of the object. You can tell since the upper-right panel shows this space as colored (detected foreground) and not black (background). This is where the apparent discrepancy is coming from, BTW: it’s detecting almost everything in the t0 image.
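For reference, the "% area covered by objects" number is computed from the final object label image, not from the outline overlay, which is why the outlines can look right while the metric is off. A minimal numpy sketch of that style of measurement (function name hypothetical, not CP's actual code):

```python
import numpy as np

def percent_area_covered(label_image):
    """Percentage of pixels assigned to any object, in the style of
    CP's area-occupied measurement. label_image is an integer array
    where 0 is background and positive values are object labels."""
    return 100.0 * np.count_nonzero(label_image) / label_image.size
```

So if IdentifyPrimary labels the wound space as an object, those pixels count toward coverage regardless of how the outlines are drawn.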

For the t0 image, the illumination correction isn't helping; it's actually making things worse, since it decreases the intensity of the actual cells, bringing them close to background. The idea behind illumination correction is for the illum image produced by CorrectIllumCalculate to reflect the illumination heterogeneities and not the actual cellular features, usually via heavy smoothing. If you can still make out the cells/cellular regions in the illum image, then it's not appropriate for the application.

I would either:

  • Remove the correction step and use the smoothed image in IdentifyPrimary, or
  • Use the IllumBlue image in IdentifyPrimary (turn off rescaling in CorrectIllumCalc first), since it highlights the foreground/background pretty well without regard to the individual cells.

I’d probably do the former; I get a % coverage of 27% after changing the thresholding method to 2-class from 3-class.
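If it's unclear what the 2-class/3-class switch does, the behavior is roughly that of Otsu thresholding with two versus three intensity classes, where in the 3-class case you choose whether the middle class counts as foreground or background. A hedged scikit-image sketch of that idea (not CP's implementation):

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_multiotsu

def otsu_masks(image):
    """Compare 2-class Otsu with 3-class Otsu, returning the 3-class
    mask both ways: middle class assigned to background (strict) and
    middle class assigned to foreground (permissive)."""
    t2 = threshold_otsu(image)
    mask2 = image > t2
    t_low, t_high = threshold_multiotsu(image, classes=3)
    mask3_mid_bg = image > t_high   # middle class -> background
    mask3_mid_fg = image > t_low    # middle class -> foreground
    return mask2, mask3_mid_bg, mask3_mid_fg
```

On images with a dim "haze" between bright cells and dark wound, the 3-class variants bracket the 2-class result, which is why switching classes can move the coverage number so much.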

I can’t really tell with this image since it’s a different resolution than the t0, so the pipeline as-is wouldn’t be expected to work.

Hi Mark,

Thanks for your response. I removed the illumination correction step as you suggested, used the smoothed image as input to IdentifyPrimary instead, and changed the thresholding from 3-class to 2-class (which was key). This works on the t0 scratch image, and like you I get that ~27% of the area is covered by objects, which makes sense. However, the same pipeline does not work on the attached (same-resolution) image where the area is totally covered by cells. When I run the pipeline on it, it says that only ~57% of the area is covered.

This is the recurring problem I have: I cannot get the same pipeline to work on both the open wound and the fully closed one. Is there a way to tweak this pipeline so that it works on the open wound but quantitates something like 100% coverage for the image showing a confluent plate of cells?

Thanks very much for your help!


What you’re asking is a bit tricky, in that we want CP to somehow know that the dark areas in the early images are part of the background, but in the later images, that they’re part of the cell.

I'm attaching a pipeline that takes a different approach. Rather than identifying the cells outright, I use an edge-detection filter to get the cell edges, and then identify the cells from that. I've left the illumination correction modules in, but I think an advantage of edge detection is that it removes the background gradations automatically. So you can try the EnhanceEdges module with the Corrected image or Smoothed image and see how it does. The Smoothed image as input seems to do better, I think.
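The edge-based strategy can be sketched outside CP as well: cells are textured, so their edges light up under a Sobel filter regardless of slowly varying background intensity, while the empty wound stays dark. This is a minimal scikit-image/scipy illustration of the idea, not the attached pipeline; the threshold and closing radius are guesses to tune per dataset.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology

def tissue_mask_from_edges(image, edge_thresh=None, close_radius=5):
    """Segment cell-covered tissue from edge texture rather than raw
    intensity, so background gradations drop out automatically."""
    edges = filters.sobel(image)           # edge magnitude
    if edge_thresh is None:
        edge_thresh = filters.threshold_otsu(edges)
    mask = edges > edge_thresh
    # bridge gaps between edge fragments, then fill enclosed regions
    mask = morphology.binary_closing(mask, morphology.disk(close_radius))
    mask = ndi.binary_fill_holes(mask)
    return mask
```

A confluent plate is textured everywhere, so this style of mask should approach 100% coverage there, while the open wound stays near-empty, which is the behavior you were after with a single pipeline.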

2013_09_04.cp (8.17 KB)