Need CorrectIlluminationCalculate help



Hi again,

I’ve been trying for a while to calculate and apply the correct illumination so that I can use the ApplyThreshold module. However, no matter which setup I use in CorrectIlluminationCalculate, they all seem to yield the same result, and it’s treating too much of the background as signal. Do you have any suggestions for how I can change the settings (I have already consulted the examples and tutorials for help)?


Here is the pipeline I’m using: Bgd.cppipe (24.0 KB)

and an example pic:



I can’t check your file, but I had this problem some time ago.
I resolved it by working in a binary environment.
First I converted the image to binary, then used a median filter to remove the “salt and pepper” noise.
To better define my cells I used Morph (especially open and close) to fill and even out the objects.
If you don’t mind spending a little time tinkering with the settings, you could also try thresholding the image directly. That can give you a nice result with fewer steps.
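For anyone reading along, the steps above (binarize, median-filter the salt-and-pepper noise, then open/close) could be sketched like this outside CellProfiler. This is a stand-in using scikit-image and SciPy on a toy image, not the actual CellProfiler modules; the parameters are assumptions you would tune.

```python
import numpy as np
import scipy.ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import disk, binary_opening, binary_closing

rng = np.random.default_rng(0)
# Toy stand-in image: one bright "cell" on a noisy background.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
img += 0.2 * rng.random(img.shape)

# 1) Convert the image to binary with an automatic (Otsu) threshold.
binary = img > threshold_otsu(img)
# 2) Median filter removes isolated "salt and pepper" pixels.
binary = ndi.median_filter(binary, size=3)
# 3) Open/close (what CellProfiler's Morph does) to even out the object.
binary = binary_closing(binary_opening(binary, disk(2)), disk(2))
```

The open/close pair is what fills small holes and smooths ragged borders; the structuring-element radius (here `disk(2)`) is the knob to tinker with.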

Another way is to use a ROI to identify the background and subtract it from the image, but I usually use NIS for this and don’t know whether CellProfiler can do it.


thank you for your comments.
I noticed that I selected the wrong image to perform the calculation on (stupid of me). I added an enhance-feature module to enhance the signal, but it’s still selecting signal in regions where there is only background.
Unfortunately, the morph module didn’t work for me.
updated pipeline: Bgd.cppipe (24.5 KB)


Usually I use NIS or other programs to remove the background by selecting a ROI, then I pass the images into CellProfiler to fill the holes and measure what I need.
There are probably other ways to do the same job in CellProfiler, but I don’t know them. Maybe @Minh or @bcimini, who know CellProfiler better than me, know how to do it.


Ah, I forgot to say that it’s better not to use the automatic settings, but to tinker with them.
Looking at your images, I’m pretty sure that as long as that noise is there it will give you headaches.
If you can upload just 2-3 images I can try to help you.
I saw the Google Drive link, but my connection is really bad at the moment and downloading 3.1 GB takes forever. Haha



Yes, 3.1 GB is a bit too large for me to try as well. May I ask whether the file contains a few super-high-resolution images, or is it a collection of thousands of images?

If that’s the former (i.e. super-high resolution), I suspect we may need to downscale the image (module Resize) to correctly calibrate the illumination function.
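As an illustration of that downscaling step, here is what CellProfiler’s Resize roughly does, sketched with scikit-image on a stand-in array (the 4x factor is just an example, not a recommendation for your data):

```python
import numpy as np
from skimage.transform import rescale

# Stand-in for one super-high-resolution image.
big = np.random.default_rng(1).random((2048, 2048))

# Downscale by 4x before estimating the illumination function;
# anti_aliasing avoids introducing new high-frequency artifacts.
small = rescale(big, 0.25, anti_aliasing=True)
```

The illumination function varies smoothly across the field of view, so estimating it on a downscaled copy loses little accuracy while being much faster.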

Regarding too much background being counted as signal, you may consider using “Lower and upper bounds” in the Threshold module. Please try entering a small value in the Lower bound box, e.g. 0.05, and see if it helps.
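To clarify what that lower bound does: it clamps the automatically computed threshold so it can never drop low enough to call faint background "signal". A minimal sketch of the idea (the helper name is hypothetical, and this uses scikit-image’s Otsu rather than CellProfiler’s internals):

```python
import numpy as np
from skimage.filters import threshold_otsu

def bounded_threshold(img, lower=0.05, upper=1.0):
    """Clamp an automatic Otsu threshold into [lower, upper].

    On a mostly-empty image Otsu can return a tiny value and split
    the background noise in half; the lower bound prevents that.
    (Hypothetical helper for illustration, not CellProfiler code.)
    """
    return float(np.clip(threshold_otsu(img), lower, upper))

# Mostly-background image: raw Otsu lands well below 0.05,
# so the lower bound takes over.
bg = np.full((100, 100), 0.01)
bg[:10, :10] = 0.02
t = bounded_threshold(bg)
```

With the bound in place, images that contain no real objects simply produce no foreground, instead of segmenting noise.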



I used a manual threshold for the Distance-B adaptive identification of secondary objects in the past, but we want to determine the threshold automatically for each image. So I thought I could use ApplyThreshold to pass the estimated threshold to the identification module. But that’s not working (for now) because of the uneven background… I’ve uploaded a czi file with just 3 images. Thank you very much for your help. Maybe I’ll have a look at whether I can use ImageJ to smooth out the background and pass those images on to CellProfiler…
@Minh, the 3.31 GB file consists of half of a 96-well plate.
I was hoping to circumvent the manual threshold (lower, upper bound) by applying the calculated threshold for each image… But what you are saying is: use a lower bound in the ApplyThreshold module and pass the calculated threshold on to the other modules?


Oh, now I understand the whole picture better.

If the uneven background is the issue, please consider trying the module Rescale with the following settings:

Hopefully it will then stretch each image to a similar background/foreground range, so you can use Adaptive Otsu to do the segmentation and don’t have to bother with a manual threshold for each input image. That method is a bit too laborious, no?

(*If you want to measure object intensity later, measure the intensity from the original image, not from the stretched one.)
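The per-image stretch that Rescale performs could be approximated like this (a sketch with scikit-image on toy data; CellProfiler’s actual settings and edge cases may differ):

```python
import numpy as np
from skimage.exposure import rescale_intensity

# Two stand-in images whose intensities sit in different ranges,
# mimicking uneven illumination between fields of view.
rng = np.random.default_rng(2)
imgs = [0.1 + 0.2 * rng.random((64, 64)),
        0.4 + 0.5 * rng.random((64, 64))]

# Stretch each image to the full [0, 1] range, so an adaptive Otsu
# threshold behaves comparably across all of them.
stretched = [rescale_intensity(im, out_range=(0.0, 1.0)) for im in imgs]
```

After the stretch, one automatic threshold setting can serve the whole image set; the originals in `imgs` are kept for the intensity measurements.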


Thank you @Minh, so I should get rid of the EnhanceFeature module and the CorrectIllumination ones? I’ll try it out now.
Alright, the noise decreased quite a bit, great :slight_smile: Can I get rid of the rest? So I would measure both the image and object intensity from the original images, correct?



You can further smooth the stretched images with a Gaussian filter, so the sandy background may vanish. Just don’t overdo it :smiley:
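That Gaussian step, sketched on toy data with scikit-image (the sigma here is an assumed starting point, not a recommendation for your images):

```python
import numpy as np
from skimage.filters import gaussian

rng = np.random.default_rng(3)
# A bright object over a "sandy" (grainy) background.
img = 0.05 * rng.random((64, 64))
img[24:40, 24:40] += 0.8

# A mild Gaussian blur flattens the grainy background; keep sigma
# small ("don't overdo it") or the object edges wash out too.
smooth = gaussian(img, sigma=1.5)
```

A sigma around the grain size suppresses the noise while leaving larger objects mostly intact; much larger sigmas start eroding the edges the segmentation needs.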

And yes, the image and object intensity should be measured with the original images.



Thanks a lot for your help. I assume you mean the Gaussian filter in the Smooth module? I think that’s smoothing it out a bit too much… I’ve tried the ‘smooth keeping edges’ which seems to be working a bit better. It’s not grainy/sandy anymore, but it still seems to count a lot of the background around the edges… Need to play a bit more with it