On September 29th, the NEUBIAS Academy webinar “Deconstructing co-localisation workflows: A journey into the black boxes” was given.
Video of the webinar: on the YouTube channel of NEUBIAS.
Slides: on Fabrice’s GitHub repository.
More than 400 people attended the webinar. We are adding to this thread all the Questions & Answers collected during the webinar. Answers given by moderators are preceded by @ references; additional information/discussion by @Fabrice_Cordelieres is given in italics.
Enjoy and post any missing question here!
Table of contents, Part 1
- Checking data integrity
Q1: Can checks for bleedthrough be done with control samples? A sample with only one dye and a sample with the other dye? Do you mean I should have one dye per sample and then compare them?
Indeed! It all depends on what you call “control samples”. By control, I mean the same type of sample as the one on which you are quantifying co-localization (sample 1), except that you will have to mono-label them (samples 2). You may also want to have a non-labelled sample (sample 3). First you’ll image sample 1, setting what seem to be appropriate imaging conditions. Then you’ll have to image samples 2 under the same acquisition conditions. If everything goes right, you won’t see any noticeable cross-talk/bleedthrough (basically, no signal in channels other than the one where you expect to see it). In case you detect signal where it is not supposed to appear, you’ll have to tune your acquisition parameters or change the way you prepare the sample, for instance opting for fluorophores that are more spectrally distant. However, keep in mind that when shifting towards the far-red part of the spectrum, the resolution will be lowered…
You may ask: “what about sample 3?”. Good question! This sample will allow you to check that the signal is not corrupted by any endogenous fluorescence (autofluorescence).
And finally: of course this type of control sample is not the only one that should be prepared. Always make samples by alternately omitting each of the primary antibodies, just to check that you don’t have a cross-reaction between your secondary antibodies…
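As a side note, quantifying the residual signal on a mono-labelled control is easy to script. Here is a minimal sketch in the ImageJ/Fiji macro language, assuming a 2-channel image of sample 2 is open, with the dye in channel 1 and channel 2 expected to be empty (the channel numbers are illustrative):

```
// Estimate apparent bleedthrough on a mono-labelled control (sample 2).
// Assumes a 2-channel image is open: channel 1 carries the dye,
// channel 2 is expected to be empty.
Stack.setChannel(1);
getStatistics(area, meanLabelled);
Stack.setChannel(2);
getStatistics(area, meanEmpty);
// A ratio close to 0 means negligible bleedthrough under these settings
print("Bleedthrough estimate (ch2/ch1): " + meanEmpty / meanLabelled);
```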
Q2: Can we simply trust the facility about the resolution of the microscope we plan to use or is it critical that we should define it before we start the assay?
@lagache: Yes, trusting the facility is the easy way… because assessing the resolution with nanometer-sized beads is not easy.
Well, I agree you should trust your Imaging Facility… or at least ask them whether they have performed this type of test for the specific imaging configuration you are using. You may find a protocol on how to prepare reference samples and how to analyze them in the documentation of the MetroloJ plugin. In the specific context of colocalization, some protocols have also been described in P. Mascalchi & F. P. Cordelières, “Which Elements to Build Co-localization Workflows? From Metrology to Analysis.” Methods Mol Biol. 2040:177-213, 2019.
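For a quick sanity check before turning to MetroloJ, a line profile across a sub-resolution bead already gives an estimate of the lateral resolution. A rough sketch in the ImageJ/Fiji macro language, assuming a bead image is open with a straight-line selection crossing one bead:

```
// Estimate lateral resolution from the FWHM of a Gaussian fit to a
// line profile drawn across a sub-resolution bead.
profile = getProfile();
x = Array.getSequence(lengthOf(profile));
// Built-in fit: y = a + (b-a)*exp(-(x-c)^2/(2*d^2)); FWHM = 2.355 * d
Fit.doFit("Gaussian", x, profile);
getPixelSize(unit, pixelWidth, pixelHeight);
print("FWHM: " + 2.355 * Fit.p(3) * pixelWidth + " " + unit);
```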
It might be hard to get the control images published, even as a supplementary figure. You may, however, decide to make your datasets, together with the control datasets, available through dedicated platforms such as Zenodo. An easy way to “show” the control images is therefore to share a link to your original data in the paper.
There are MANY algorithms to restore 3D images. It is recommended to check whether the one that performs best for you is conservative (i.e. the total intensity before and after is the same). Some parameters have to be set and we’ve described one way to set them (at least the number of iterations for iterative algorithms) here. Please keep in mind that most of the available tools consider the unitary deformation (PSF: point spread function) to be the same all over the sample… which is a huge approximation. Image restoration is really useful, but only gives an estimate of what your actual sample might be. Always have a critical view of its result, have a precise look at your images, and look for artefacts: a structure that is visible on the deconvolved image should at least be present as “seeds” in the raw image.
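The conservativity check mentioned above is straightforward to script. A minimal sketch in the ImageJ/Fiji macro language, assuming two stacks are open and named “raw” and “deconvolved” (the titles are placeholders for your own images):

```
// Check whether the restoration is conservative: compare the total
// intensity of the stack before and after deconvolution.
function totalIntensity(title) {
    selectWindow(title);
    total = 0;
    for (s = 1; s <= nSlices; s++) {
        setSlice(s);
        getStatistics(area, mean);
        total += mean * getWidth() * getHeight(); // mean * pixel count = sum
    }
    return total;
}
ratio = totalIntensity("deconvolved") / totalIntensity("raw");
print("Total intensity ratio (deconvolved/raw): " + ratio); // ~1 = conservative
```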
Q5a, @romainGuiet: you should be safe, but you still need your single-stain controls to assess the bleedthrough.
Trying to get spectra apart is one thing that should be considered. But you also have to keep in mind that if you use longer-wavelength dyes, you’ll end up with a lower resolution, increasing the chance of detecting co-localization: it’s all a matter of compromises.
When dealing with image analysis, it’s always best to stay as close as possible to the raw data. The first measure to take is trying to improve the sample preparation and image acquisition. Then, if you still can’t reach a clear separation of your dyes, you may turn to post-processing. Keep in mind that spectral unmixing performs some math on the images, based on references you’ll provide. One assumption you’ll make is that the cross-talk/bleedthrough is the same everywhere. However, depending on the fluorophore’s environment, this might not be totally true. As a consequence, you may “over” re-attribute signal to a channel, ending up with negative intensities on the other channel (oh gosh, we’ve just generated anti-light!!!).
At least two ImageJ/Fiji plugins exist: LUMoS Spectral Unmixing (Learning Unsupervised Means of Spectra) and Spectral Unmixing Plugins.
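To make the “math on the images” concrete, here is a deliberately naive sketch of two-channel unmixing in the ImageJ/Fiji macro language. It assumes a single, spatially uniform bleedthrough coefficient k measured on a mono-labelled control; the window titles “C1”/“C2” and the value k = 0.12 are purely illustrative, and real unmixing plugins work from full reference spectra rather than one scalar:

```
// Naive two-channel unmixing: corrected C2 = C2 - k * C1,
// with k the bleedthrough coefficient from a mono-labelled control.
k = 0.12;
selectWindow("C1");
run("Duplicate...", "title=C1_scaled");
run("32-bit");                   // avoid rounding when scaling
run("Multiply...", "value=" + k);
imageCalculator("Subtract create 32-bit", "C2", "C1_scaled");
rename("C2_unmixed");
// Negative pixels here are the "anti-light": a sign k was overestimated
```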
Q6a: In image processing software, they always ask for a threshold. Can this step replace the background correction?
Q6b: In different plugins such as Coloc 2 it is highly recommended to subtract background. Which method do you recommend? Rolling ball radius?? Thank you!
Threshold and background correction are not the same! When you perform a threshold, you only consider pixels that have an intensity within a certain range. If you then compute the minimum intensity of the thresholded pixels, the retrieved value will be equal to the minimum threshold you’ve set. When performing background correction, you subtract a unique value (e.g. using the Process>Math>Subtract function from ImageJ/Fiji) or an image which is an estimate of the background (rolling ball algorithm, Process>Subtract Background). In this case, the minimum possible intensity over the image will be zero. In other words, with a threshold you select pixels and keep all the raw intensities; with background correction you set the baseline to zero.
With the rolling ball correction, an estimate of the background is generated and used to correct the baseline. In case all the objects in the image are of the same size and isotropic shape, you’ll have no problem finding a proper radius for the filter to work. For non-isotropic shapes, a parameter exists to take this property into account. However, when dealing with objects of different sizes, if the radius is chosen too small you’ll start subtracting intensities within the bigger structures.
Whether you use a threshold (any method) or other ways to correct the baseline, you should ALWAYS care about control samples!
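To see the difference on your own data, here is a minimal sketch in the ImageJ/Fiji macro language contrasting the operations discussed above (the subtraction value of 25 and the rolling ball radius of 50 are illustrative and must be adapted to your images):

```
orig = getTitle();
// 1) Constant baseline correction (Process>Math>Subtract)
run("Duplicate...", "title=bg_constant");
run("Subtract...", "value=25");
// 2) Background estimate correction (Process>Subtract Background)
selectWindow(orig);
run("Duplicate...", "title=bg_rollingball");
run("Subtract Background...", "rolling=50"); // radius must exceed object size
// 3) Threshold: selects pixels but leaves raw intensities untouched
selectWindow(orig);
setAutoThreshold("Otsu dark");
run("Create Selection");
getStatistics(area, mean); // statistics on RAW values inside the selection
print("Mean raw intensity of thresholded pixels: " + mean);
```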
Q7: So AI-based denoising + colocalization is inadvisable? Alternatively: segmentation on denoised images -> colocalization with ROIs on the raw images?
Nothing is forbidden, as long as you characterize precisely the impact of the processing methods you are using and make sure the conclusions you draw from them are not a side effect of the processing itself! If using innovative methods that have not yet been explored, the work to be done in terms of checks and controls might be heavier compared to the use of “well-known” methods. In any case, always document the full processing (the macro recorder from ImageJ/Fiji might help), make your workflow public (e.g. through GitHub), make your datasets available (e.g. through Zenodo) and do not hesitate to reference both in the BioImage Informatics Index.
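For illustration, here is what a recorded and commented processing trace might look like once saved next to the data. The commands below are standard ImageJ/Fiji calls, but the parameter values and the output path are purely illustrative, not a recipe:

```
// Workflow trace, as captured by Plugins>Macros>Record... and commented
run("Subtract Background...", "rolling=50"); // baseline correction
run("Gaussian Blur...", "sigma=1");          // mild denoising
setAutoThreshold("Otsu dark");               // segmentation threshold
run("Create Mask");                          // binary mask of selected pixels
saveAs("Tiff", getDirectory("home") + "mask_for_coloc.tif");
```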
Q8a: How do you choose the plane that you want to colocalize? In many cases you have cells that are cut out of the stack in the middle, so automatic analysis cannot be done.
Q8b: When you are looking at a single slice in a confocal stack for co-localization, would you still recommend using 3D, especially if the slice is very thin, say 0.42 µm? Also, in Fiji, how do you see the 3D?
Q8c: “Biology is not flat”, so what should we keep in mind when doing colocalization in 3D? (Clearly deconvolution is very useful in this case to correct for spherical aberrations, and also sample drift + chromatic shift.)
Q8a, @romainGuiet: You should not choose! Analyse in 3D first to check whether your “quantifier” is sensitive to the reduction to 2D.
Q8b, @romainGuiet: From a 3D stack, you calculate your quantifier on the overall stack. Ideally you should check that your result is not sensitive to the reduction to 2D.
Q8a: I totally agree with @romainGuiet: in case you’ve acquired data in 3D, it probably means your problem can’t be summarized as 2D. So why would you want to extract only part of the information, running the risk of biasing the data?
Q8b: If your acquisition is in 2D, there is unfortunately nothing you can do… except go back to the microscope and re-acquire the data. This is of course only true if your sample/region of interest is not fully encompassed within your 2D acquisition.
Q8c: True: deconvolution can help. About things to keep in mind… well, it all depends on the method you are using. For Pearson and Manders you analyze pixel-wise, so there is not that much you have to do. When dealing with object-based approaches, where the objects are summarized as single spots, the disparity of resolution should be taken into account. Let’s imagine two centres either in the same plane, or one on top of the other (same (x, y) coordinates): the reference resolution is the xy resolution or the z resolution, respectively. This means that if the distance between the two spots is below the optical resolution, they do co-localize (or, more precisely, knowing the current resolution, you can’t exclude that they do co-localize). Now imagine the two spots close to one another, but one slightly off-centred compared to the other: which reference resolution would you take? The xy resolution? The z resolution? In fact, you would have to take both resolutions into account, in a weighted fashion, depending on the orientation between the two dots. This is what the JACoP plugin does (see the ImageJ conference 2008 paper for an extensive explanation).
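For the sake of illustration, here is a hedged sketch, in the ImageJ/Fiji macro language, of an orientation-weighted distance criterion between two spot centres. The ellipsoidal weighting below is one plausible formulation, not necessarily the exact expression JACoP implements; refer to the ImageJ conference 2008 paper for the latter. Coordinates and resolutions are assumed to be in the same calibrated unit, and all example values are illustrative:

```
// Decide whether two spot centres can be called co-localized, weighting
// the xy and z resolutions by the orientation of the inter-spot vector.
function colocalized(x1, y1, z1, x2, y2, z2, rxy, rz) {
    dx = x2 - x1; dy = y2 - y1; dz = z2 - z1;
    d = sqrt(dx*dx + dy*dy + dz*dz);
    if (d == 0) return true;
    cosT = dz / d;                 // cosine of the angle to the optical axis
    sinT = sqrt(1 - cosT * cosT);
    // Local resolution limit: rxy dominates in-plane, rz along the axis
    limit = sqrt(pow(rxy * sinT, 2) + pow(rz * cosT, 2));
    return d <= limit;             // below the limit: cannot exclude coloc
}
// Spots 0.3 µm apart along z, rxy = 0.2 µm, rz = 0.5 µm -> prints 1 (true)
print(colocalized(0, 0, 0, 0, 0, 0.3, 0.2, 0.5));
```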