I am trying to identify cell types A (green) and B (red) in rodent brain sections and quantify their sizes and pixel intensities. Although I tried to minimize technical variation before acquiring images (tissue perfusion, brain slicing, storage, immunohistochemical staining, and exposure settings under the microscope were all tightly controlled), there are inevitable differences in overall background intensity among images.
I am trying to apply the same criteria to all images (i.e., the same thresholds and filtering methods for identifying cells A and B, except for the threshold correction factor used to identify a region of interest). However, this fixed set of criteria fails to identify objects in some images while it works on others.
What do you recommend to minimize these image-to-image variations? Should I rescale image intensities, or should I adjust the threshold values and filtering factors per image rather than applying constant values to all images?
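To make concrete what I mean by a per-image threshold, here is a rough sketch of the kind of approach I am considering (plain NumPy/SciPy; the function names, the Gaussian background estimate, and all parameter values are my own placeholders, not my actual pipeline): subtract an estimate of each image's smoothly varying background, then let an automatic method such as Otsu's pick the threshold from that image's own intensity distribution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def otsu_threshold(values, nbins=256):
    """Otsu's threshold computed from this image's own histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                    # pixels at or below each bin
    w1 = w0[-1] - w0                        # pixels above each bin
    m0 = np.cumsum(hist * centers)          # cumulative intensity mass
    mu0 = m0 / np.maximum(w0, 1e-12)        # mean of lower class
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)  # mean of upper class
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return centers[np.argmax(between)]

def segment_cells(img, background_sigma=25):
    """Return a boolean mask of putative cells in one channel.

    background_sigma is a placeholder: it should be much larger than a
    cell so the Gaussian blur estimates background, not the cells.
    """
    img = img.astype(float)
    # Estimate and remove the slowly varying background of this image.
    background = gaussian_filter(img, sigma=background_sigma)
    corrected = np.clip(img - background, 0, None)
    # Threshold adapted to this image's corrected intensity distribution.
    return corrected > otsu_threshold(corrected.ravel())

# Two synthetic images with the same "cell" but different backgrounds:
rng = np.random.default_rng(0)
img_low = np.full((100, 100), 30.0) + rng.normal(0, 3, (100, 100))
img_high = np.full((100, 100), 120.0) + rng.normal(0, 3, (100, 100))
img_low[40:50, 40:50] += 100    # same bright object in both images
img_high[40:50, 40:50] += 100

mask_low = segment_cells(img_low)
mask_high = segment_cells(img_high)
```

With a fixed global threshold, a value that works on `img_low` would select almost every pixel of `img_high`; the per-image version finds the same object in both despite the different background levels. My question is whether this kind of adaptive approach is preferable to rescaling all images to a common intensity range first.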