Areas of Thresholded Objects Changing between Images

C1-Background.tif (2.3 MB) C1-Foreground 1.tif (2.3 MB) C1-Foreground 2.tif (2.3 MB)

(Screenshot of the resulting spot areas)

Hi Everyone,
I’m still an ImageJ beginner and am trying to figure out the best way to process some of my images.

In my images, I am working to get the integrated densities of some dark spots (ROIs). However, I am limited by uneven lighting.

I have looked at several methods of background correction and have mostly been using flatfield correction, where I subtract a background image from the image I’m analyzing.

Recently, I’ve taken a couple of images of the same spots, both with the same camera settings and lighting. When I process the two spotted images using the same background image for the flatfield correction, the spots come out with slightly different areas in the results. I suspect there are small image-to-image variations in the lighting pattern. Is there a way to extract the spots so that they end up with the same areas in the final (background-subtracted) results from image to image?

All images are in DNG format and are imported as hyperstacks using Bio-Formats. I then split the color channels and work with the red channel only. Working on duplicates of the foreground and background images, I apply a Gaussian blur (sigma 2) to each foreground image, then use the Image Calculator to subtract the background image from each foreground image. I then apply an Otsu threshold, convert to a mask, and redirect the measurements to the original spotted images.
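To make the steps concrete, here is a rough sketch of the equivalent operations outside the ImageJ GUI (Python with scikit-image; the file names and the sigma are the ones mentioned above, everything else is just illustrative and may need adjusting):

```python
from skimage import io, filters
from skimage.measure import label, regionprops

# Red channels of the background and one foreground image
# (assumed already exported as TIFF after splitting the channels)
background = io.imread("C1-Background.tif").astype(float)
foreground = io.imread("C1-Foreground 1.tif").astype(float)

# Gaussian blur (sigma 2) on the foreground, then subtract the background
blurred = filters.gaussian(foreground, sigma=2, preserve_range=True)
corrected = blurred - background

# Otsu threshold; the spots are dark, so keep pixels below the threshold
thresh = filters.threshold_otsu(corrected)
mask = corrected < thresh

# Measure each spot on the original (unblurred) image, similar to
# redirecting the measurements in ImageJ's Set Measurements
labels = label(mask)
for region in regionprops(labels, intensity_image=foreground):
    integrated_density = region.mean_intensity * region.area
    print(region.label, region.area, integrated_density)
```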

Attached are the images after choosing the red channel but before any further processing. I’ve also included the resulting areas that I got.

Could anyone provide some ideas for producing two resulting (background-subtracted) images that have the same ROI spot areas?

Thanks,
Mary Beth

Hi Mary Beth,

I think there is very little you can do to completely eliminate this problem - what you are observing is the ‘noise’ of the camera and your imaging set-up. No two images will ever be exactly the same, even if taken under exactly the same conditions, because of small intrinsic variations in the chip readout, small fluctuations in the lighting, and other factors that are beyond your control. You can easily illustrate that the two images are not identical by subtracting one from the other (see below).
So, you can’t really expect the measured area sizes to be exactly the same. Your two measurements appear to differ by about 1%, which is probably a reasonable estimate of the intrinsic error you have to expect with your current setup (and doesn’t seem too bad, but that obviously depends on what sort of differences you are trying to measure).
You may be able to improve on the error/noise, but if you want to try that, I would start by looking at the imaging setup rather than the analysis method, i.e. the stability of the lighting, the camera readout noise, etc.
Alternatively, take a series of images under the same conditions and average over them before doing the analysis. I would expect that taking two series of, let’s say, ten images each, averaging each series, and then analysing the two averages would reduce the differences between your measured spot sizes.
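If you want to quantify the variation, the pixel-wise difference between your two foreground images is easy to compute; a minimal sketch (Python/NumPy just as an illustration - the ImageJ Image Calculator with "Subtract" or "Difference" does the same thing):

```python
from skimage import io

# Two nominally identical acquisitions (the attached red-channel images)
img1 = io.imread("C1-Foreground 1.tif").astype(float)
img2 = io.imread("C1-Foreground 2.tif").astype(float)

# Pixel-wise difference; with a perfectly repeatable system this would be all zeros
diff = img1 - img2
print("mean difference:", diff.mean())
print("std of difference:", diff.std())  # a rough measure of the frame-to-frame noise
```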

Hope this gives you some ideas.
Regards,
Volko

Difference between the two sample images:

Hi Volko,

Thank you so much for the quick response!

I’m going to do some more thorough testing to optimize the lighting and camera settings as best I can. I think averaging multiple images is a good idea - that would definitely help me feel a little more confident in the final result. Would I need to stack all 10 of the images and then use Z Project (average intensity projection) for this?

I really appreciate your help!

Best,
Mary Beth

I think you can be confident in your results even if you just use single images. You just need to be aware that any measurement has a certain error attached to it; there is no such thing as a perfect, absolutely accurate measurement. The level of noise/error determines the smallest difference you can reliably distinguish between two images. In your case, if the expected differences in your spot sizes are, let’s say, 10% or larger, the level of variability you see between your images wouldn’t be a problem. However, if you are trying to detect changes in spot sizes of around 1%, it would become a problem, since changes at that level can occur purely due to the noise intrinsic to your measurement system. That is why the signal-to-noise ratio (SNR) matters when considering any measurement.
There are various ways to characterise the SNR of an image acquisition system. For an example, see NoiSee (https://imagej.net/NoiSee) and the associated publication. It was developed for confocal microscope images, but I think you could use the tools to characterise your system.
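If you just want a rough number without setting up NoiSee, a simple per-pixel estimate from a series of repeated images of the same, unchanged scene could look like this (a generic sketch, not the NoiSee method itself; the file names are placeholders):

```python
import numpy as np
from skimage import io

# A series of repeated acquisitions of the same, unchanged scene
frames = np.stack([io.imread(f"repeat_{i}.tif").astype(float) for i in range(10)])

mean_img = frames.mean(axis=0)         # signal estimate per pixel
std_img = frames.std(axis=0, ddof=1)   # noise estimate per pixel

snr = mean_img / np.maximum(std_img, 1e-9)  # avoid division by zero
print("median per-pixel SNR:", np.median(snr))
```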

In order to average a series of individual images, I would convert them to a stack and then use the Z project option - it is probably the most efficient way of doing it.
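Outside ImageJ, that stack-and-average step is just a pixel-wise mean over the series (a minimal sketch in Python; the file names are placeholders):

```python
import numpy as np
from skimage import io

# Load the repeated acquisitions and average them pixel by pixel,
# equivalent to Image > Stacks > Z Project... with "Average Intensity"
frames = np.stack([io.imread(f"spot_series_{i}.tif").astype(float)
                   for i in range(10)])
averaged = frames.mean(axis=0)

# 'averaged' then goes through the usual blur / subtract / threshold steps
io.imsave("spot_series_avg.tif", averaged.astype(np.float32))
```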

I don’t know how you acquire your images, but perhaps your camera software already has an option to capture a time series, in which case I would use that - the series would probably already be saved as some form of image stack.

Good luck,
Volko

Hi Volko,

Sorry for the delay in response and thanks again for the suggestions!

I’ve worked more on standardizing the lighting and have also moved towards including an optical density step tablet in my images, so that each image I capture contains a reference for calibration. Averaging several images together before processing also seems to be helping.
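In case it helps anyone else, the normalization against the step tablet can in principle be done along these lines (a rough sketch; the file name, the patch coordinates and the target level are just placeholders for whatever calibration ends up being used):

```python
from skimage import io

img = io.imread("spots_with_tablet.tif").astype(float)  # placeholder file name

# Mean intensity of one known step on the tablet (placeholder coordinates)
ref_value = img[100:150, 200:250].mean()

# Scale the image so the reference step always maps to the same value,
# compensating for overall lighting drift between acquisitions
target = 200.0  # arbitrary target level for the reference step
normalized = img * (target / ref_value)
```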

Thanks again for your help!

Best,
Mary Beth