How can I use ImageJ to determine fluorescence enrichment around an object in an immunofluorescence image?

Sample image and/or code

D69_OPAD_CFSE_aGc647_NE555_1006.tif (4.0 MB)

  • Upload an original image file here directly or share via a link to a file-sharing site (such as Dropbox) – (make sure however that you are allowed to share the image data publicly under the conditions of this forum).
  • Share a minimal working example of your macro code.


  • What is the image about? Provide some background and/or a description of the image. Try to avoid field-specific “jargon”.

These are images of human cells (neutrophils) and bacteria (Neisseria gonorrhoeae). In these images, the following are immunofluorescently labeled:

  1. total bacteria (both inside and outside) - labeled with CFSE - appear green
  2. bacteria outside of the neutrophil - labeled with an antibody - appear pink
  3. neutrophil elastase, a protein in the neutrophils - labeled with an antibody - appears red

In some images, the green bacteria have red neutrophil elastase staining around them, and we call this “enrichment”. When the area around a bacterium is enriched, the red staining forms a halo that covers at least 50% of the circumference and sits tight against the bacterium (we sometimes compare it to the bacterium sitting in a “coffee bean” of red staining).

Analysis goals

  • What information are you interested in getting from this image?

I want to preface this by saying I am very new to using ImageJ or any image-analysis software - let alone using masks successfully - and I have no experience coding for image analysis, so please keep that in mind when responding to my post.

Our main goal is to determine – for each intracellular bacterium (green, but not pink), is it enriched with neutrophil elastase (red)?
We would then like to enumerate how many are enriched (positive) and how many are not (negative), to get a proportion.

As far as I can tell, this involves at least doing the following:
• Determining which bacteria are in focus enough to analyze accurately
• Discriminating between intracellular and extracellular bacteria (using the extracellular stain, pink)
• Instructing the analysis tool to compare the red staining directly surrounding a bacterium to the staining immediately beyond it, and determining whether it is enriched
• Repeating this process for many bacteria and many cells over many images in a set.
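For what it’s worth, the comparison in the last two bullets can be sketched outside ImageJ, e.g. in Python with numpy/scipy. Everything here (the function name, the ring widths in pixels, and the use of a ring-vs-annulus intensity comparison as a stand-in for the “coffee bean” criterion) is an illustrative assumption, not a tested pipeline:

```python
import numpy as np
from scipy import ndimage as ndi

def ring_enrichment(red, bact_mask, ring_px=2, gap_px=2, bg_px=4):
    """Compare red intensity in a thin ring around one bacterium to a
    local background annulus a little farther out.

    Returns (enrichment_ratio, positive_fraction):
      enrichment_ratio  - mean ring intensity / mean local background
      positive_fraction - fraction of ring pixels brighter than the
                          local background median (a rough proxy for
                          "halo covers >= 50% of the circumference")
    """
    # thin ring hugging the bacterium
    inner = ndi.binary_dilation(bact_mask, iterations=ring_px) & ~bact_mask
    # background annulus: past the ring plus a small gap
    outer_edge = ndi.binary_dilation(bact_mask,
                                     iterations=ring_px + gap_px + bg_px)
    inner_edge = ndi.binary_dilation(bact_mask,
                                     iterations=ring_px + gap_px)
    bg = outer_edge & ~inner_edge

    ring_vals = red[inner]
    bg_vals = red[bg]
    ratio = ring_vals.mean() / max(bg_vals.mean(), 1e-9)
    positive_fraction = (ring_vals > np.median(bg_vals)).mean()
    return ratio, positive_fraction
```

A real pipeline would still need these thresholds validated against manual positive/negative calls before trusting the counts.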


  • What stops you from proceeding?
  • What have you tried already?
  • Have you found any related forum topics? If so, cross-link them.
  • What software packages and/or plugins have you tried?
  1. The primary challenge with these images is the high background of the red staining in the neutrophils - it is diffuse throughout the cytoplasm and membranes. Occasionally there will be spots or entire regions of high intensity and enrichment that aren’t near a bacterium. Therefore it really isn’t possible to use just the intensity of the red staining to determine if a phagosome is positive for red enrichment or not. It’s possible that for accurate analysis, some bacteria, despite meeting the criteria of being in focus and intracellular, will have to be excluded because they reside in neutrophils that contain red staining that is too intense throughout, making a determination of enrichment impossible.
  2. We have tried analyzing by hand, but we end up with vastly different results depending on which individual is analyzing. Therefore, we need an objective, unbiased analysis tool.

I don’t have a workflow for you, but I will point out a couple of things.

  1. The image you hosted is RGB; hopefully you still have the original microscope images, as they will be far easier to use (should be one channel per, well, channel).
  2. Your images appear to either have chromatic aberration or the sample has been moving between channel acquisitions, as most of the green signal sits 4-5 pixels to the left of very similar red or purple staining. Looking around a bit more, it looks more like drift between images, since the direction of the offset is not 100% consistent.

And if it is sample drift, it might be almost impossible to correct for since it is not based on any hardware issue, other than the images being taken too slowly.

Hi MicroscopyRA,

Thanks for the response! I do have the original images of course, but I uploaded a composite. We are aware of the shift - we suspect it is a technical problem with the microscope itself, since the samples are fixed. Will this be an obstacle for ImageJ-based analysis, and is there any way around it?



If it is an optical problem, the shift is usually N pixels in the X direction and M pixels in the Y direction, whatever those values are, and can be corrected for by adjusting those channels.
Chromatic shift origins measurement and correction - ImageJ Step 13
ImageJ - Pixel shift/image registration
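As a sketch of what “adjusting those channels” means in practice (in ImageJ itself, Image > Transform > Translate does the same job interactively): a minimal Python translation that pads with a fill value rather than wrapping pixels around. The function name and fill value are arbitrary choices:

```python
import numpy as np

def shift_channel(channel, dx, dy, fill=0):
    """Translate a single 2-D channel by (dx, dy) whole pixels,
    padding the vacated edge with `fill` so the image keeps its size
    (np.roll would wrap pixels around, which is wrong for registration)."""
    out = np.full_like(channel, fill)
    h, w = channel.shape
    # source and destination windows that overlap after the shift
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    out[dst_y, dst_x] = channel[src_y, src_x]
    return out
```

For example, if the green channel is 4 pixels left of the red, `shift_channel(green, dx=4, dy=0)` would move it back into register, assuming the shift really is constant across the field.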

You might be able to use fluorescent beads or an autofluorescent sample like Convallaria seeds (fairly common in core facilities) to test the microscope across its field of view. If the shift is not simple, then it might take some testing to isolate what is happening.

Side note, just because the samples are fixed, does not mean they are not moving. I have had plenty of samples of fixed cells where the ratio of cell/media to mounting media resulted in “wobbly” samples (usually in samples where the user wants to avoid cytospin due to the damage and distortion of the cell).

If you cannot fix the pixel shift issue, I don’t think there will be much in the way of an automated pipeline that will work for you without extreme assumptions. Something like thresholding each channel and “if an object of channel X is within Y microns of an object in channel Z” sort of logic… which tends to give very shoddy results.


Thanks for the advice. I hadn’t thought about the possibility that the samples are moving. If that is true, the shift probably wouldn’t be consistent across images… could TransformJ Translate or another plugin help me determine what the shift is for each image automatically? Or can they only help me adjust each channel after I’ve determined the adjustment myself?

Basically at this point, I’m trying to decide whether these images are worth any more investment of my time, or whether I should invest my time in developing a different approach/experiment to answer my question. So I guess my question is - if I am able to correct the shift, is there an automated pipeline that can handle this type of question, given the significant background on the red channel? Or will it be impossible to know until we’ve tried?

Thanks for your help!

As with most things, it’s impossible to know how accurate it would be until you tried… and generally figuring out how accurate the pipeline is requires a ground truth. If you are having trouble with various people being unable to agree on a ground truth, that is a significant problem even after you have a pipeline.

I don’t know of any good ways to determine the shift automatically unless the shift is consistent across the image. In the case of the image posted, it was not consistent across the image, so I don’t think there are any great ways of correcting for that kind of shift without additional experimental input (like beads mixed into the sample that fluoresce in each channel). Faster exposure times might help, but I don’t know what instrument you are using. Often there are options that allow very fast imaging (quad cubes for widefield, utilizing multiple detectors at once for confocal) with some tradeoffs in bleedthrough that need to be dealt with in postprocessing.
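For completeness: if the shift were global, phase correlation is the standard trick for estimating it automatically, and is roughly what many registration tools do internally. A minimal numpy sketch (assumed function name; only meaningful for a single uniform, whole-pixel shift):

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation taking image a to image b
    via phase correlation. Only valid when the whole frame moved together."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    # normalized cross-power spectrum; its inverse FFT peaks at the shift
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

If the two channels disagree about the shift in different parts of the field, the correlation peak smears out, which is one quick way to confirm the drift is non-uniform.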

Or if it is sample drift, and your samples are fixed, adjusting the mounting media might solve the drift problem, leaving only the background problem. Not sure how much staining troubleshooting you have done there, various blocking buffers, fixation methods, etc.

If the problem is 100% always large blobs of red, you could also size-threshold out any sufficiently large red blobs, as long as lots of small red blobs don’t overlap too often :)
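That size-thresholding idea can be sketched in a few lines (Python/scipy; the function name and area cutoff are placeholders to be tuned on real data):

```python
import numpy as np
from scipy import ndimage as ndi

def remove_large_blobs(mask, max_area):
    """Drop connected components larger than max_area pixels from a binary
    mask, e.g. to discard big diffuse patches of red signal while keeping
    small bacterium-sized halos."""
    labels, n = ndi.label(mask)
    # area (pixel count) of each labeled component, labels 1..n
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)   # index 0 = background, stays False
    keep[1:] = sizes <= max_area
    return keep[labels]
```

The caveat in the sentence above applies directly: if many small red blobs touch, they merge into one large component and get thrown away together.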

This gives me a lot to think about…

In terms of a ground truth - there is agreement on individual positive/negative calls. The problem comes in because, as you can imagine, counting by hand takes a lot of time and effort, so we a) count a set number and b) don’t all count the exact same bacteria. We expected that an automated count would count all bacteria in a field, thereby increasing the n and the power, and removing any bias that comes from what we have chosen to count. I’ll have to consider whether the effort would be best spent figuring out the shift and an automation pipeline with these images and this technique, or best spent trying something new. Thanks for all the advice to help me make a good decision!
