How to minimize image-to-image variations?

I am trying to identify cell types A (green) and B (red) in rodent brains and quantify their sizes and pixel intensities. Although I tried to minimize technical variation before acquiring images (tissue perfusion, brain slicing, storage, immunohistochemical staining, and exposure under the microscope were all tightly controlled), there are inevitable differences in overall background intensity among images.

I am trying to apply the same criteria to all images (i.e., the same thresholds and filtering methods for identifying cells A and B, except for the threshold correction factor used to identify a region of interest). However, this set of criteria fails to identify objects in some images while it works on others.

What do you recommend to minimize these image-to-image variations? Should I rescale image intensities, or should I change the threshold values and filtering factors for each image rather than applying constant values to all images?


Sorry this post got past us. For IHC images (which is what you have, right?), background normalization, and more importantly foreground intensity, is a tough problem. PLEASE NOTE, though, that intensity in immunostains is NOT stoichiometric, i.e., as this page says:

So in general, we would suggest:
(1) Use the UnmixColors module to deconvolve the stains into grayscale channels
(2) Segment cells or regions with the Identify* modules
(3) Measure sizes etc., but note that intensity measurements are not necessarily linear.
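Outside of CellProfiler, the three steps above can be sketched in Python with scikit-image (the synthetic image, blob location, and threshold choice here are all hypothetical stand-ins, not your data; `rgb2hed` plays the role of UnmixColors for an H&E/DAB-like stain set):

```python
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Synthetic RGB "IHC" image: light background plus one darker stained blob
rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 0.9)
img[20:30, 20:30] = [0.3, 0.2, 0.5]          # stand-in for a stained cell
img += rng.normal(0, 0.01, img.shape)
img = np.clip(img, 0, 1)

# (1) "Unmix" the color image into per-stain grayscale channels
hed = rgb2hed(img)                            # hematoxylin / eosin / DAB axes
hema = hed[:, :, 0]

# (2) Segment objects in one unmixed channel (per-image Otsu threshold here)
mask = hema > threshold_otsu(hema)
labels = label(mask)

# (3) Measure sizes; remember the caveat that stain intensity is nonlinear
for region in regionprops(labels, intensity_image=hema):
    print(region.area, region.mean_intensity)
```

The per-image Otsu threshold in step (2) is one answer to the original question: it adapts to each image's intensity distribution instead of using one fixed cutoff.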

To normalize background or regional intensity variations, please see our “Illumination Correction” pipeline and tutorial: … Correction
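The core idea of that illumination-correction approach can be sketched as: estimate the smooth illumination function (here with a very wide Gaussian; the tutorial's modules offer several smoothing choices) and divide it out. Everything below is synthetic, assumed data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic image: a smooth left-to-right illumination gradient
# multiplying a flat "tissue" signal with one bright object
yy, xx = np.mgrid[0:128, 0:128]
illum = 0.5 + 0.5 * xx / 127.0               # brighter toward the right
signal = np.full((128, 128), 0.4)
signal[40:60, 40:60] = 0.9                   # a bright object
img = illum * signal

# Estimate the illumination function by heavy smoothing, then divide it out
bg = gaussian_filter(img, sigma=32)
corrected = img / (bg / bg.mean())

# Background difference between right and left edges shrinks after correction
before = img[100, 120] - img[100, 8]
after = corrected[100, 120] - corrected[100, 8]
print(before, after)
```

In CellProfiler itself this corresponds to CorrectIlluminationCalculate followed by CorrectIlluminationApply with "Divide".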
Also, various morphological operators (e.g., EnhanceOrSuppressFeatures -> Enhance -> Speckles) can provide effective background normalization as well. These may affect the foreground intensities, but given the caveats above about nonlinear staining intensity, this is not much of an issue.

Hope this helps!