How to correctly measure fluorescence intensity in heterogeneous samples

Hi all!

Lately I have been trying to combine Fiji and CellProfiler into a workflow to quantify fluorescence intensity across a large batch of images from different experiments.

So, to be clear: I work on zebrafish, and I want to quantify the fluorescent signal in different areas of the fish at different timepoints. Besides the timepoints, I have other experimental parameters that I vary on purpose.

Each larva is imaged individually on an EVOS FL Auto in 3 channels: transmitted light, GFP and mCherry. I manually position each fish and switch channels to acquire the images, so every fish ends up with 3 separate single-channel images.

To give some logic to the process, I kept the light intensity at maximum and the gain at minimum, changing only the exposure time. To set the exposure, I pushed the signal into saturation (where red pixels appear) and then slowly decreased the exposure until the saturated pixels disappeared.
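
As a sanity check, something like this little scikit-image snippet could verify after the fact that no saved image still contains saturated pixels (a rough sketch; the filename is a placeholder and I assume integer TIFFs):

```python
import numpy as np
from skimage import io

img = io.imread("01_0001.tif")              # placeholder filename
sat_value = np.iinfo(img.dtype).max         # 65535 for 16-bit, 255 for 8-bit
n_sat = np.count_nonzero(img == sat_value)
print(f"{n_sat} saturated pixels ({n_sat / img.size:.4%} of the image)")
```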

On this point alone I would already appreciate feedback, to know whether this is the right approach to acquiring the images.

To analyse the images, I am trying to decide whether it is better to measure the intensity or the area of positive signal obtained by thresholding.

For now, I use the second method in CellProfiler: a semi-automatic approach where I select different areas of the fish and measure the percentage of positive signal in each area (one of the aims of the experiment). So I have positive area (from thresholding) over total image area (which is always the same).
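
In code form, the measurement boils down to roughly this (a minimal scikit-image sketch of the idea, assuming Otsu thresholding, which CellProfiler also offers; the filename is a placeholder):

```python
from skimage import io, filters

img = io.imread("01_0001.tif")                 # placeholder: GFP channel
thresh = filters.threshold_otsu(img)           # global Otsu threshold
positive = img > thresh                        # binary mask of positive signal
fraction = positive.sum() / positive.size      # positive area / total area
print(f"Positive area: {fraction:.2%} of the image")
```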

The problem with both methods is the heterogeneity of the images. That's why I was thinking of writing a small script in Fiji to normalise the images, but I am a little lost as to which approach to follow.

Do you think the thresholding method is a good way to proceed, and how would you normalise the images to make the analysis more reliable and reproducible?

Thank you for the long read and the help!

Hi @Valerio_Laghi,

Your questions make sense, but without some images giving a good view of what you are describing, it is difficult to help.

The question here seems to be: are you satisfied with just knowing whether the signal is there or not (threshold), or are you interested in knowing how much signal there is (intensity measurement)?

What biological question are you trying to answer with this experiment? From there and with a couple of images, we may be able to start giving ideas and opinions on how best to proceed.

Best

Oli

Hi @oburri,

Thank you for your answer. I will upload some images later today (unfortunately I can't at the moment).

Actually, I work on virus propagation, so my biological question is “How does the virus move through the body over time?”. This means that I measure the presence of the signal at different timepoints in different areas (for example inside or outside the CNS) and model the propagation.

This means that the thresholding method should be enough, especially considering that at this preliminary screening stage I cannot tell whether an increase in relative intensity corresponds to an actual increase in virus quantity, or to an accumulation of fluorescent signal from virus present at the same x,y position but at different z, i.e. in different tissues.

Best,
Valerio

Hi @Valerio_Laghi,

I think that for a time-lapse experiment you might anticipate that the final timepoint will have the greatest intensity. To obtain usable data you’d therefore need to keep the exposure consistent across all samples, so perhaps be careful not to expose the initial timepoint so much that the final timepoint will be saturated. If done correctly you should have no trouble measuring both area and fluorescence intensity. If you’re concerned about plane of focus, it may be worth acquiring z-stacks, depending on whether the volume of images you end up with will be manageable.

Regarding the analysis, thresholding should be fine to minimise the impact of autofluorescence while detecting your signal. I’m not sure that trying to normalise the images is going to be particularly effective; the variability in this model is well-established and may be best addressed by using greater numbers of fish per condition.

Am I right in thinking that you’re manually segmenting your fish images to inspect specific regions, and is that being done within CellProfiler itself?

1 Like

Hi @DStirling,
Your assumption is correct, and I tried to follow your logic. Unfortunately, the signal at the earlier timepoints is so weak that it is not possible to use an exposure that would also avoid saturation at the last timepoints. Plus, I cannot really afford to lose that first signal, or it will compromise my model. That's why I tried to keep the other camera settings at fixed values and change only the exposure, hoping to normalise the measured values to the exposure time.
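
To make that last point concrete, this is the kind of correction I have in mind; a rough sketch, assuming the camera response is linear, the gain is fixed, and the dark offset is known (the offset, filenames and exposure values here are hypothetical):

```python
import numpy as np
from skimage import io

DARK_OFFSET = 100.0   # hypothetical camera dark level in grey values

def normalise_to_exposure(path, exposure_ms):
    """Convert raw grey values to grey values per millisecond of exposure."""
    img = io.imread(path).astype(np.float64)
    # Subtract the offset first: only the light-dependent part of the
    # signal scales with exposure time.
    return (img - DARK_OFFSET) / exposure_ms

early = normalise_to_exposure("early_0001.tif", exposure_ms=500)
late = normalise_to_exposure("late_0001.tif", exposure_ms=50)
```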

It would be much easier to use a high-throughput machine with z-stacks, but I even had trouble explaining to my boss why saturation is bad, so for now this is off the table. Plus, it would be a little overkill for a preliminary screening.

I know that for zebrafish normalisation cannot do wonders, but it can still reduce some of the variability generated by the acquisition method (I will provide images as soon as I can).

In CP, I identify the signal as primary objects and, using masking and object editing, I manage to segment the signal inside and outside the CNS.

Here is my pipeline: TR:INT mask w:GFP-mCh edit Definitive.cpproj (2.4 MB)

So, in the end, I would like to make my workflow as consistent, solid and reproducible as possible.

If I understand correctly and you have changed the exposure time, any measurements you have regarding intensity (threshold, intensity sum, mean, etc.) will not be comparable between images. Ideally, all experimental variables need to be kept as constant as possible, though that can be difficult with large objects that may not sit at the same distance from the objective, like zebrafish. Orientation can also be an issue, given the depth of zebrafish larvae you need to image through.

Edit: Ah, I see @DStirling already raised this problem, great! Note that your intensities don’t need to be visible to you (that is a lookup table issue), only to be measurable against whatever background you may have (signal-to-noise ratio). If your SNR is still too low, that is likely something that can only be solved on the hardware/sample prep side.
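
To illustrate that point: signal that looks invisible on screen can still be measured, as long as it stands above the background noise. A rough sketch of such a check, assuming you can mark one ROI over the dim signal and one over empty background (the coordinates and filename are hypothetical):

```python
from skimage import io

img = io.imread("01_0001.tif").astype(float)   # placeholder filename
signal = img[400:450, 600:650]     # hypothetical ROI over the dim signal
background = img[0:50, 0:50]       # hypothetical ROI over an empty area
snr = (signal.mean() - background.mean()) / background.std()
print(f"SNR ~ {snr:.1f}")   # signal invisible to the eye can still score well above noise
```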

1 Like

I agree with others that measuring absolute intensities is unlikely to be the best strategy if you’re changing exposure times; thresholding for whatever the brightest area(s) are within your sample and tracking their positions seems reasonable.

Have you considered incorporating some sort of standard, like gold beads, next to your sample to have something quantitative to normalize your samples to? It might take trying a few things to find something with a brightness comparable to your signal, but it could be worth it if you want to recover at least semi-quantitative data.
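
For what it’s worth, the normalization itself would then be trivial; a minimal sketch, assuming you can identify the bead region in each image (the corner ROI and filename below are just placeholders):

```python
import numpy as np
from skimage import io

img = io.imread("01_0001.tif").astype(np.float64)   # placeholder filename
bead_mask = np.zeros(img.shape, dtype=bool)
bead_mask[0:50, 0:50] = True       # hypothetical ROI containing the beads

# Dividing by the mean bead intensity puts images acquired under
# different conditions onto a common, bead-relative scale.
normalized = img / img[bead_mask].mean()
```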

2 Likes

Hi @Valerio_Laghi,

Thanks for sending the pipeline. It may be worth seeing if you can still pull out signal at the early timepoints that is not visible to the naked eye; signal can still be present and detectable at the lower end of the scale. If that’s not possible, one approach might be to image at both low and high exposure at each timepoint, though I would still expect images taken with different exposures not to be comparable. In terms of normalisation, I think that trying to adjust the signal in this way when working in a model with such variability is likely to create more problems than it solves. As Beth mentioned, a standard to normalise to might be a good way forward if you do wish to pursue that, but applying standards to fish embryos might be tricky.

Looking at your pipeline, it’s difficult to see exactly what’s going on without some sample images, but I think I understand what you’re doing. At present it appears that you’re measuring intensity on images which have been modified by the enhance/suppress module, which probably won’t give you reliable data. If you’d like to measure absolute intensity, the best approach is probably to use the enhanced features to detect objects, then evaluate the intensity of those objects within the original image rather than the modified one. I’m also not sure how well adaptive thresholding would work against embryo autofluorescence; is your signal distinct enough to eliminate this form of background? In my experience there can be particularly high yolk sac fluorescence on some days of development, especially in the GFP channel.
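
The “detect on enhanced, measure on raw” pattern looks something like this outside of CellProfiler; a rough scikit-image sketch, using a simple Gaussian background subtraction as a stand-in for the enhance/suppress step (the filename and sigma are placeholders):

```python
import numpy as np
from skimage import io, filters, measure

raw = io.imread("01_0001.tif").astype(np.float64)   # placeholder filename
enhanced = raw - filters.gaussian(raw, sigma=50)    # crude background removal

# Segment on the enhanced image...
mask = enhanced > filters.threshold_otsu(enhanced)
labels = measure.label(mask)

# ...but take the intensity measurements from the untouched raw image.
for region in measure.regionprops(labels, intensity_image=raw):
    print(region.label, region.area, region.mean_intensity)
```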

1 Like

Sorry, I should really learn to clean up my pipelines.
The idea is to measure the intensity only on the raw images, while for the thresholding I use those modules to clean the images and enhance the edges.

You are absolutely right, the yolk and eye autofluorescence is a nightmare; that's why I manually remove their signal from the identified objects. The other edit is to create a mask for the CNS only.

As soon as possible I will share a cleaned-up pipeline with annotations.

Thank you all for the help; I am learning by myself, going by trial and error.

@bcimini's idea is really good; I just discussed it with my PI, who is now searching for the right kind of beads.

@Research_Associate yes, the problem is that with the lower signal the SNR is really low, and even the autofluorescence of the larvae starts to be an issue.

We just started using zebrafish as well, and the primary problem wasn’t the autofluorescence or even the orientation; it was that the signal was far too low with the initial dye (injection of dyed cancer cells). Once they swapped to a different dye, the signal part of the SNR grew to the point where the autofluorescence essentially “disappeared.”

Again, I don’t know the experimental details of what you are measuring in the GFP and mCherry channels (maybe it really is GFP and mCherry? :slight_smile: ), but if you can enhance the signal with a stronger promoter, or swap to a brighter dye if the cells are injected, that might be a good option. The influence of autofluorescence is entirely relative to that signal!

Widefield images will also naturally have more background, so if thin confocal slices are an option, that might be a way to tease out your signal.

Side note, this is all from the perspective of widefield LED excitation tests with a quad cube. I am not sure how your cube set might affect your background signal in this case.

1 Like

Hi, I am actually measuring GFP and mCherry produced by the virus itself during replication, so the quantities are really low and we cannot change dyes.

It works pretty well with confocal acquisition (which I know far better) and 3D reconstruction, but in widefield I am stumbling over several problems. The best thing would be to acquire an Acquifer machine, which is perfect for this, but that is not really my call.

My widefield is just an EVOS, so I am pretty sure it's a fairly basic machine, ahah.

Is there any reason you can’t continue using confocal? In my experience the EVOS Auto line isn’t as sensitive as some other widefield systems and can struggle with noise.

Anyway, if you’re able to upload some images later that’d be very helpful.

1 Like

Several, some more relevant, some less.

  1. My boss likes the idea of having a widefield screening of a high number of fish, to determine which timepoint or area is better to investigate with the confocal.

  2. I need to image the fish for 3 days after infection and, oddly, under the confocal they die faster.

I agree that with our confocal it would be far easier and more reliable, but, as a new PhD student, it is not my choice to make.

I will upload the photos tomorrow for sure.

1 Like

That’s fair enough; hopefully it should be feasible to do a preliminary screen with widefield. If you’re going to use the EVOS, I’d take care to make sure it’s in 16-bit mode rather than 8-bit, which was the default on the model I’ve used. This will give you a wider dynamic range and could help pick out weaker areas of signal.
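
A quick way to confirm the mode took effect is to inspect the saved files; a small sketch assuming scikit-image (the filename is a placeholder):

```python
from skimage import io

img = io.imread("01_0001.tif")           # placeholder filename
print(img.dtype, img.min(), img.max())   # expect uint16, not uint8
# Beware: a uint16 file whose maximum never exceeds 255 may still have
# been captured in 8-bit mode and merely saved in a 16-bit container.
```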

I don’t think widefield is inherently unreliable for this, with the right parameters you should be able to get the measurements you need.

Full disclosure: My previous job involved developing a competing system to do these sorts of fish assays.

1 Like

That’s the same reason we started using the widefield as well, in addition to the hourly cost. The widefield did require a higher signal, though, and thus the change in dyes. You may need to look into other, brighter proteins (there are a few preprints and recent publications on brighter FPs, though I’m not sure how accessible they are).

It’s not surprising that they would die faster under the confocal, as you likely need to turn the laser power up to the point where phototoxicity becomes a problem, given that the widefield signal is so weak.

Depth can be another significant factor: our fish were originally suspended in agarose, which was something of a mistake. With the right tools you can create divots that the larvae can be inserted into. Not only does this help with alignment, it also gets the zebrafish closer to the coverslip. It probably won’t help with GFP vs. autofluorescence, but it may help with other sources of noise and allow you to pick out your signal. Taking a small z-stack for widefield deconvolution may also help to pull out your signal.

1 Like

Hi,
Here are some examples of my images! Sorry for the delay!
_0000 is TL, _0001 is eGFP, _0002 is mCherry.
These are from the same fish at 24h with SPC infection:
01_0000.tif (2.3 MB) 01_0001.tif (2.3 MB) 01_0002.tif (2.3 MB)
Same fish, 168h later:
01_0000.tif (2.3 MB) 01_0001.tif (2.3 MB) 01_0002.tif (2.3 MB)

This is 24h intravenous:
01_0000.tif (2.3 MB) 01_0001.tif (2.3 MB) 01_0002.tif (2.3 MB)
96h later:
01_0000.tif (2.3 MB) 01_0001.tif (2.3 MB) 01_0002.tif (2.3 MB)

Hi! I'm kind of a big noob concerning your problem, but I had an idea about localising the fluorescent areas. Z-stacks are fine but generate massive amounts of data and seem overkill. Maybe you could take two images of each fish every time: one from the side and one from the top/bottom (in a plane perpendicular to the first), so you could attribute x and y localisations to spots in both images and thereby localise them within the fish. The only problem I see is taking the image from that other plane reliably: how do you turn the fish in a way that is exactly reproducible?

I don't know whether that idea is new to you or whether it is feasible, but I wish you success in your project!