Input my own image into pipeline

CellProfiler is exactly what I need!

I have:
grayscale tissue autofluorescence microscopy images

I want to:

  • quantify changes in cell and nuclear intensity, size, shape, and crowding (hallmarks of cancer)
  • export into Excel for plotting and comparison between normal and abnormal tissue
  • determine the nuclear differences between regions of interest within a single image (ex1 attached)
  • determine the nuclear differences within an individual image (ex2 attached)
  • compare different images (for example, quantify the difference between ex1 and ex2)

I successfully ran the Speckles pipeline using the two default images. When I tried to run it on only one of my own images, ex1 (image attached):

  • I had to convert to .tif from .jpg
  • the output access file was empty
  • the output data images were blank (white screen)
  • error occurred:
    “There was a problem running the analysis module IdentifyPrimAutomatic which is number 07. Error using ==> CPthreshold at 206 Image processing was canceled in the IdentifyPrimAutomatic module because you have chosen to calculate the threshold on a per-object basis, but CellProfiler could not find the image of the objects you want to use.”

What format do my images need to be in (RGB, .tif, etc.)? I did get a grayscale .jpg to work, but only after putting it in a separate folder.
Do I need two images for comparison?
What is the difference between the default 1-162hrh2ax2.tif and 1-162hrhoe2.tif? Are they Green and Blue components?
Are the Tissue Neighbors and Speckle pipelines the most appropriate for what I am trying to do?
Are there mapping capabilities from the original image to the exported data? That is, once exported to Excel, how do I identify certain regions? (For example, left vs. right in the first uploaded image.)

I am just starting with this and am already stuck. Any suggestions would be greatly appreciated, thank you!


I think CellProfiler can definitely help you out, but you’ll need to do some work to come up with a pipeline that fits your needs. The Speckles pipeline looks at a nuclear stain (Hoechst) and a DNA-damage stain (H2AX) and relates the intensity of the two within each identified cell. So yes, you need two images, with two different stains. Additionally, all of the settings are very specific to that experiment: the method of H2AX identification, for example, is Otsu Per Object, which relies on the fact that the nuclei have already been identified. That is why you were getting that error.
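For context on that setting: Otsu’s method picks the intensity threshold that best separates an image’s histogram into a dark and a bright class, and “Per Object” just means the threshold is computed within each already-identified nucleus rather than over the whole image. Here is a minimal global-Otsu sketch in NumPy — an illustration of the idea only, not CellProfiler’s actual implementation:

```python
import numpy as np

def otsu_threshold(pixels, nbins=64):
    """Return the threshold maximizing between-class variance (Otsu)."""
    counts, edges = np.histogram(pixels, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    total = counts.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = counts[:i].sum(), counts[i:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; skip this split
        m0 = (counts[:i] * centers[:i]).sum() / w0   # mean of dark class
        m1 = (counts[i:] * centers[i:]).sum() / w1   # mean of bright class
        var = (w0 / total) * (w1 / total) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t

# Toy data: background pixels near 0.1, nuclei pixels near 0.9.
pixels = np.concatenate([np.full(900, 0.1), np.full(100, 0.9)])
t = otsu_threshold(pixels)
print(0.1 < t < 0.9)  # True: the threshold lands between the two classes
```

“Per Object” then repeats exactly this computation inside each nucleus mask, which is why the module fails if the nuclei objects haven’t been identified first.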

Usually, to quantify changes in nuclear morphology, cell culture where the nuclei are stained is the easiest for CellProfiler to handle. Tissue samples are tricky to deal with, but if you try inverting the image (because CellProfiler looks for bright objects on a dark background) and perhaps applying an illumination correction, you might be able to use IDPrimAutomatic to find some of the cells. The help section for each of the modules explains all of the settings that you will have to change to make the modules work with your images.
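The inversion step is simple to picture: for a grayscale image scaled to the 0–1 range, inversion just subtracts every pixel from 1, so dark nuclei on a bright background become bright objects on a dark background, which is what CellProfiler expects. A tiny sketch (plain NumPy, with a toy array standing in for your loaded image):

```python
import numpy as np

# Toy 3x3 grayscale "image" in the 0-1 range:
# a dark nucleus (0.1) on a bright background (0.9).
img = np.array([[0.9, 0.9, 0.9],
                [0.9, 0.1, 0.9],
                [0.9, 0.9, 0.9]])

# Invert: dark objects become bright objects on a dark background.
inverted = 1.0 - img
print(inverted[1, 1])  # the former dark nucleus pixel is now bright (0.9)
```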

Give this a try, and let me know how it goes!


… and keep in mind that it can be very difficult to automatically identify and measure cells with CellProfiler if you can’t readily identify the boundaries of cells by eye in your images. To me, it seems very difficult to identify many cells in the images you’ve attached, but perhaps your eyes are better trained to these images than mine!


Thank you for your replies. I successfully ran the Neighbors pipeline but am unsure how to interpret the results. The inverted image seems to have several bright spots visible by eye:

but they do not seem to be identified or outlined:

Should I make the cropped window smaller, or somehow alter the smoothing filter? I tried some manual image processing to increase the contrast before loading the image into the pipeline, but the output error suggested I was loading RGB (all of my images are grayscale).

Are the objects with the most neighbors grouped together as one big area? Although I can see individual bright spots, it looks like one big spot from:

Please advise, thank you!


You should definitely crop your images so that there isn’t a big black space in the background. The algorithms in CellProfiler look at the pixel intensities in your entire image, even the part where you know there is no image of your tissue sample! So taking this out may improve results somewhat.

I would definitely not do any manual image processing beforehand (do you mean Photoshop?) because it’s hard to know exactly what you are doing when you play with all those tools. CellProfiler, on the other hand, has many different modules that allow you to select the exact filter size, for example, that you would like to use when smoothing, or the specific algorithm you would like to use for Illumination correction.
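To make the “exact filter size” point concrete: smoothing is typically a convolution with a kernel whose width you choose explicitly, so the amount of blurring is a documented, repeatable parameter rather than a Photoshop slider. A small sketch with SciPy’s Gaussian filter — just an illustration of the kind of control CellProfiler’s smoothing modules expose, not their actual implementation:

```python
import numpy as np
from scipy import ndimage

# A noisy-looking row of pixel intensities (0-1 range).
img = np.array([[0.0, 1.0, 0.0, 1.0, 0.0]])

# sigma controls the filter width: larger sigma = heavier smoothing.
light = ndimage.gaussian_filter(img, sigma=0.5)
heavy = ndimage.gaussian_filter(img, sigma=2.0)

# Heavier smoothing pulls all values closer to the mean intensity,
# so the pixel-to-pixel variation shrinks.
print(light.std() > heavy.std())  # True
```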

CellProfiler, in IdentifyPrimAutomatic, looks at pixel intensities and basically asks: where is the brightest pixel in this neighborhood? Where are the dark spots in between these bright areas? Let’s call these bright areas, delineated by darker areas, our cells! It’s a lot more complicated than that, and you can read about all the different algorithms in the manual/help, but that’s the basic idea. Here, you can see that CellProfiler has a tough time looking at your tissue sample and picking out the cells from all the bright and dark areas; it’s doing the best it can. The different colored blobs output by IDPrimAuto are the identified cells.
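The “bright areas delineated by darker areas” idea can be sketched outside CellProfiler too. This is not IdentifyPrimAutomatic’s code — the real module is far more sophisticated (declumping, size filtering, multiple thresholding methods) — just the bare skeleton of thresholding plus connected-component labeling, using NumPy and SciPy:

```python
import numpy as np
from scipy import ndimage

# Toy image: two bright "cells" on a dark background (0-1 range).
img = np.zeros((8, 8))
img[1:3, 1:3] = 0.8   # first bright blob
img[5:7, 5:7] = 0.9   # second bright blob

# Crude global threshold (CellProfiler would compute this, e.g. by Otsu).
mask = img > 0.5

# Label connected bright regions: each label is one candidate "cell".
labels, n_objects = ndimage.label(mask)
print(n_objects)  # 2 separate objects found
```

You can see from this skeleton why low contrast hurts: if the dark areas between cells aren’t reliably below the threshold, neighboring blobs merge into one object.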

As for the Neighbors module, I’m still not quite sure why you are using it. Is your assay attempting to quantify the degree of clustering of cells? Again, the blobs are colored to tell you different things about what CellProfiler has found: what percent of cells are touching, or how many neighbors they have. So the red ones have lots of neighbors, the blue ones have few. First try the cropping, though; CP is probably very confused by your image!

Hope this helps!


Hi, Kate -

Thank you for your reply. To clarify:

I used the CP crop module embedded in the Neighbors pipeline, so the black background is not from my original image. Here is an example of the raw and then cropped image from the Neighbors pipeline:

I agree that I am confusing CP, but I’m not sure how to improve the tissue image for analysis. When I mentioned the other image processing, this was with spectral images (not Photoshop) to make the bright spots brighter, and the dark spots darker (increase the contrast).

From the IdentifyPrimAuto output of the above example, the Overlay dots seem to be right on top of areas that seem bright, rather than outlining them:

Would you suggest a different pipeline? I am not (unfortunately) working with assays, but I thought the nuclei in the tissue might be detected by the Neighbors pipeline if there was enough contrast, which is why I started with it. I want the same output that Neighbors provides (I think): mean intensity, crowding (which I interpret to mean number of neighbors; is this wrong?), change of cell area (nucleus enlargement?), etc.

Are there more appropriate pipelines? I really want to use CP and will try to combine different modules, but my programming skills are very (very very) limited and it took me weeks just to get this far on a user-friendly program like CP! (…and I am still stuck). I am really glad that you and your team are so great about helping computer-challenged students like me :mrgreen: I am hoping that more people in my group will start using CP.

On a tangent, is there specific nomenclature for the LoadImages module? Sometimes my image will load; other times the error says there are no images with the identified text. Also, is the position to the left of the dot 0 or 1?

Thanks again, looking forward to hearing back!


Hi Bev,

I think you are definitely on the right track in attempting to use the Neighbors example pipeline for your application. But as you note, the tissue image needs to be improved. If you look at the input image for the Neighbors pipeline (clones1.JPG), you can see that it has pretty crisp, clear lines between the cells, which makes it easier for IDPrimAuto to work well. Unfortunately, your images are not as distinct.

One possibility is to use SmoothOrEnhance with “Enhance Bright Speckles (Tophat filter)” as the choice under “Enter smoothing method…” and then enter the approximate width of the cells in pixels under “If you choose any other setting beside ‘Fit Polynomial’…”. What this module does is attempt to correct the image for uneven illumination or fluorescence by subtracting out variations in intensity larger in spatial scale than the width value you input. The result should hopefully be an image where small-scale cell edges are enhanced in brightness compared to the large-scale tissue background. If the result looks good (i.e., the cell edges are more distinct), this image can be input into IDPrimAuto.
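The operation Mark describes — subtracting out intensity variation larger than a chosen spatial scale — is essentially a white top-hat filter. Here is a small SciPy sketch where the `size` parameter plays the role of the cell-width setting; this is an illustration of the filter, not CellProfiler’s own code:

```python
import numpy as np
from scipy import ndimage

# Toy 1-D "image": a broad illumination ramp with one small bright speckle.
background = np.linspace(0.2, 0.8, 50)   # large-scale uneven illumination
img = background.copy()
img[25] += 0.5                           # small bright feature (a "cell")

# White top-hat = image minus its morphological opening.
# Features narrower than `size` survive; the broad ramp is removed.
tophat = ndimage.white_tophat(img, size=5)
print(tophat.argmax())  # 25: the speckle remains, with the ramp subtracted
```

On a real image you would set `size` to roughly the cell diameter in pixels, exactly as the module’s width setting suggests.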

To answer your 2nd question, there shouldn’t be any specific nomenclature needed for LoadImages. If you are getting error messages about whether the image is present or not, have a look at the input image directory you’ve specified to make sure the image is really there. However, I’m not sure whether I understand what you mean by “position to the left of the dot 0 or 1”.

Hi, Mark -

Thank you for your response. I have been trying to improve my image so that CP can recognize it better. I want to try the same approach (such as normalization) within CP, so that the image analysis is confined entirely to a CP pipeline.

How do I load a single module that is not part of the example pipeline, such as the one you suggested, SmoothOrEnhance? For example, I am trying to modify the Neighbors pipeline and want to insert a CorrectIlluminationCalculate module to generate a smoothed image, and then use CorrectIlluminationApply for normalization. I feel silly because I realize this is so basic I should know how to do it, but all I have are the text files, and File Open in CP does not have a single-module option (only pipelines or images).

To clarify the nomenclature question: there are two options in LoadImages, text or position. How do I specify position in the “order option”?

Thank you very much!

Never mind about the module part! (Hit the + and - in the adjust-modules section of the main CP window.) But I am still working on improving my images so that CP has an easier time recognizing the features of interest :smile:

Hi Bev,

Good to hear that you figured out how to add the modules.

Also, for future reference, to use the order option, you do the following:
(1) Specify “Order” under “How do you want to load these files?”
(2) Enter the position (1,2,3,…) in “Type the text that one of image…”
(3) Specify the name in CellProfiler under “What do you want to call these images…”
(4) Repeat (1-3) for each image position.

For example, if you have 2 sets of images, one DAPI followed by GFP, you can specify “Order” then “1” and “OrigNucl”, followed by “2” and “OrigGFP” (no quotes when you enter them). The entire image set of however many images is then grouped pair-wise with the order specified.

You can enter 3 images in each LoadImages module. If you have more than that, you can add another LoadImages module and specify the order beginning at 4 onwards.
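The pairwise grouping described above amounts to taking the full, ordered file list and slicing it into consecutive groups of N channels. A hypothetical sketch (the filenames are made up; CellProfiler does this grouping internally when you choose “Order”):

```python
# Suppose the image folder lists files in acquisition order:
# DAPI, GFP, DAPI, GFP, ... (2 channels per field of view).
files = ["img001.tif", "img002.tif", "img003.tif", "img004.tif"]

channels_per_set = 2  # "Order" position 1 = OrigNucl, position 2 = OrigGFP

# Group consecutive files into image sets, one set per field of view.
image_sets = [files[i:i + channels_per_set]
              for i in range(0, len(files), channels_per_set)]
print(image_sets)
# [['img001.tif', 'img002.tif'], ['img003.tif', 'img004.tif']]
```

This also shows why the order option is fragile: if one channel’s file is missing, every subsequent image set pairs the wrong files together, with no error.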

Hope this helps!

Hi, Mark -

Thank you for the image order option clarification.

I am still trying to optimize my images so that CP can recognize abnormal nuclei. After manually normalizing, I tried the following parameters, per your suggestion, in the pipeline below:

Enhance Bright Speckles (Tophat filter)
pixel width = 15
filter size = 5,40
object pixel diameter = 10,30


I always get the message, “The images you have loaded in the SaveImages module are outside the 0-1 range, and you may be losing data” and also received the error, “There was a problem running the image analysis. Sorry, it is unclear what the problem is. It would be wise to close the entire CellProfiler program in case something strange has happened to the settings. The output file may be unreliable as well. Matlab says the error is: Undefined function or variable ‘FeatureNbr’. in the DisplayHistogram module, which is module #07 in the pipeline.” How do I fix these?

Below is the output before the pipe got to the histogram module:
after inverting the original, the SmoothedEnhanced output - why are the left and bottom margins swooshy?

There are now two colored dots on the overlay from IdPrimAuto:

Are there other modules you might suggest to improve image quality?

Thank you!


Hi Bev,

Unfortunately, there was a bug in DisplayHistogram that was causing the error that you saw. We have fixed this in an upcoming 5811-bugfix release which should be available soon.

As far as the “swooshy” edges on your image after smoothing: this is because you’re cropping (into a rectangle, maybe?) before you process your images. They don’t need to be cropped, because there is no plate-edge artifact like in the example. CellProfiler is probably just confused by all the extra zeros around your image introduced by the cropping. (Although I’ve never seen this swooshiness before, so I’m not really positive.)
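The zero-padding effect is easy to reproduce: when a smoothing filter’s window reaches the border between real tissue and the cropped-in zeros, the zeros get averaged in and drag the border pixels dark, producing halo-like artifacts. A small sketch (NumPy/SciPy, toy data; this is an illustration of the mechanism, not a claim about exactly what your pipeline did):

```python
import numpy as np
from scipy import ndimage

# A bright tissue region surrounded by zeros (as after rectangular cropping).
img = np.zeros((11, 11))
img[3:8, 3:8] = 0.8

smoothed = ndimage.gaussian_filter(img, sigma=2.0)

# Pixels at the tissue/zero border are dragged toward zero because the
# filter window averages in the padding -- the dark "swooshy" halo.
print(smoothed[5, 3] < img[5, 3])  # True
```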

SmoothOrEnhance may help improve identification of nuclei somewhat (you could try using a tophat filter), but going back to imaging and sample preparation will make image analysis soooo much easier. Is there any stain you can use to identify the nuclei of interest? Tissue samples are tough to work with unless there is some kind of staining.


Hi, Kate!

Thank you for your reply. No, we do not work with stains. I will continue improving my images before trying to analyze with CP.