I’m still an ImageJ beginner and am trying to figure out the best way to process some of my images.
In my images, I am trying to measure the integrated densities of some dark spots (ROIs), but uneven lighting is limiting me.
I have looked at several methods of background correction and have mostly been using flatfield correction, where I subtract a background image from the image being analyzed.
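To make the correction step concrete, here is a tiny NumPy sketch on synthetic data (the gradient, sizes, and intensity values are made up, not my real images); the last line also shows the classic division form of flat-field correction alongside the subtraction I currently use:

```python
import numpy as np

# Synthetic stand-ins (not my real data): a left-to-right illumination
# gradient as the "background" image, and the same field with one dark spot.
flat = np.tile(180.0 + 40.0 * np.linspace(0.0, 1.0, 64), (64, 1))
img = flat.copy()
img[20:30, 20:30] -= 60.0            # a dark spot on the uneven field

subtracted = img - flat              # subtraction-style correction (what I do now)
divided = img / flat * flat.mean()   # classic flat-field division, for comparison
```

After subtraction the background sits at zero everywhere and the spot is a constant negative offset; after division the background sits at the mean illumination level instead.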
Recently, I took two images of the same spots, both with the same camera settings and lighting. When I process the two spotted images using the same background image for the flatfield correction, the spots come out with slightly different areas in the results. I suspect there is some image-to-image variation in the lighting pattern. Is there a way to extract the spots more robustly, so that the same spot ends up with the same area in the final (background-subtracted) results from image to image?
All images are in DNG format and are imported as hyperstacks using Bio-Formats. I then split the color channels and work in the red channel only. Using duplicates of the foreground and background images, I apply a Gaussian Blur with sigma 2 to each foreground image, then use the Image Calculator to subtract the background image from each foreground image. I then apply an Otsu threshold, run Convert to Mask, and redirect the measurements to the original spotted images.
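For clarity, the pipeline I just described can be sketched in Python (my actual processing is done in ImageJ; this is only an illustrative equivalent using SciPy for the blur and a hand-rolled Otsu threshold, with small synthetic images standing in for my DNG data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                         # pixel count at or below each bin
    w1 = w0[-1] - w0                             # pixel count above each bin
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1e-12)             # mean of the lower class
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)  # mean of the upper class
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic stand-ins: a uniform background and one dark 10x10 "spot"
background = np.full((64, 64), 200.0)
foreground = background.copy()
foreground[20:30, 20:30] -= 80.0

blurred = gaussian_filter(foreground, sigma=2)   # Gaussian Blur, sigma 2
corrected = blurred - background                 # Image Calculator > Subtract
t = otsu_threshold(corrected.ravel())            # Otsu threshold
mask = corrected < t                             # dark spots fall below it
spot_area = int(mask.sum())                      # "Area" of the spot
intden = float(foreground[mask].sum())           # measured on the original (Redirect)
```

The last line mirrors the Redirect step: the mask is computed on the background-subtracted image, but the intensities are summed from the original spotted image.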
Attached are the images after selecting the red channel but before any further processing, along with the resulting areas I measured.
Could anyone provide some ideas for producing two resulting (background-subtracted) images that have the same ROI spot areas?