What pre-processing should happen on deconvolved images used for colocalization analysis?

Hi everyone,

I’m teaching myself to do colocalization quantification for revisions to a manuscript, but I’m very much out of my depth. After reading the Colocalization Analysis and Coloc 2 wiki pages as well as “A practical guide to evaluating colocalization in biological microscopy”, I have a very basic question about what pre-processing/thresholding my particular data needs before I can calculate MCC values. In case this is of interest, the two channels I’m using in Coloc 2 are (1) nanoparticle fluorescence and (2) Cav1 protein fluorescence.

I have super-resolution images of cells acquired on a DeltaVision OMX microscope. The images were processed using the scope’s software (softWoRx) to perform structured illumination reconstruction and camera alignment. The result is a 32-bit image stack that I have saved as a tif file. I’ll be honest, I don’t understand this preprocessing outside of the “big picture”, but it is my understanding that this is comparable to deconvolution processing.

There is very little background after the SI reconstruction, so I am wondering whether I need any further processing, such as Costes auto-thresholding, when using Coloc 2, or whether I should only look at the original Manders’ coefficients?

I appreciate any help or advice you can give.

@scor

So - I’m not an expert-expert on colocalization… for that - I would refer to @chalkie666 for some better insight…

But what I would say - if Manders’ Colocalization Coefficients are your measure-of-interest… the main pitfall for MCCs is that a zero intensity level is the only criterion used to distinguish background from signal - so background reduction is a huge issue. Global thresholds can be set to deal with a non-zero background level - but keep in mind that MCCs will vary greatly depending on where those thresholds are set.
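To make that threshold sensitivity concrete, here’s a minimal NumPy sketch of the Manders coefficients (this is my own illustration, not the Coloc 2 implementation - the function name and threshold handling are assumptions for demonstration purposes):

```python
import numpy as np

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """Compute Manders' M1 and M2 for two same-shaped channel arrays.

    M1: fraction of ch1 intensity found where ch2 is above its threshold.
    M2: fraction of ch2 intensity found where ch1 is above its threshold.
    Pixels at or below a channel's own threshold are treated as background.
    """
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    # Zero out background so it contributes to neither numerator nor denominator
    sig1 = np.where(ch1 > thr1, ch1, 0.0)
    sig2 = np.where(ch2 > thr2, ch2, 0.0)
    m1 = sig1[sig2 > 0].sum() / sig1.sum() if sig1.sum() > 0 else 0.0
    m2 = sig2[sig1 > 0].sum() / sig2.sum() if sig2.sum() > 0 else 0.0
    return m1, m2
```

On a toy pair of channels, raising one channel’s threshold just enough to exclude a single dim pixel can move M1 from ~0.67 to 1.0 - that jumpiness is exactly why the chosen thresholds matter so much.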

Could you share an example dataset with us here to better help?

You could also try out a new colocalization package - it’s in R - that will help by providing some stats… was recently published (global method and pixel-based method). Take a look at the R code here: https://github.com/lakerwsl/RKColocal


Yep, and to add to that, the threshold sometimes needs to be biologically sound, not just based on what the detector detected (pixel values… and negative offsets to the detector). That’s why, in the case of secondary antibodies, the secondary control is so important. In your case it doesn’t sound like that would be important for the cav1 fluorescence, but I’m less sure about the distribution of your nanoparticles. For example, is inside vs outside of cell important? If 50% of your nanoparticles are not entering the cells, you could be measuring permeability or uptake, rather than colocalization with another intracellular signal.

Another consideration is that Manders takes pixel intensity into account (rather than treating pixels as simply “on” or “off”), and I know that certain types of super-resolution, like AiryScan from Zeiss, can do crazy “mathy” things to the intensity values, which makes the Manders coefficients somewhat suspect. I don’t know the specifics of how the processing for the OMX system works, but if you get pixel distributions like the ones shown in the link, you may have to take exact pixel values with large grains of salt.


Thanks for your advice. That’s definitely why I am nervous about doing this analysis in a way that is correct.

I’ve attached a slice of one of my source Z-stacks with the two channels I’m comparing.

Some advice I got from someone at my imaging core facility was to normalize the 32-bit data. He suggested getting an average background value (from a small ROI distant from the cell) and a max value (from the whole image) for each channel after a max-intensity projection of the source. I am then supposed to go back to my source hyperstack, set the min/max for each channel using the B&C tool, and convert the image to 16-bit. Though, when I do that, the converted image looks like a psychedelic tie-dye shirt.
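For what it’s worth, that tie-dye look after conversion is often a sign that negative or out-of-range values wrapped around during the 16-bit conversion. Here’s a hedged NumPy sketch of the normalization your core facility described (the function name and the explicit clipping step are my own additions, not their exact procedure):

```python
import numpy as np

def normalize_to_16bit(img, bg_roi):
    """Rescale a 32-bit float channel to 16-bit.

    bg_roi: a slice tuple selecting a background region far from the cell.
    The background mean maps to 0 and the image max maps to 65535.
    Values are clipped first, so negatives cannot wrap around to bright values.
    """
    img = np.asarray(img, dtype=np.float64)
    bg = img[bg_roi].mean()
    top = img.max()  # see the caveat below about relying on a single pixel
    if top <= bg:
        raise ValueError("image max must exceed background mean")
    scaled = (img - bg) / (top - bg)
    return (np.clip(scaled, 0.0, 1.0) * 65535).astype(np.uint16)
```

Without the `np.clip`, any pixel below the background mean goes negative and, on casting to unsigned 16-bit, wraps to a huge value - which would look exactly like psychedelic tie-dye.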

My R is a bit rusty, but I will take a look at this package as well. sample2_PLD_OVCAR8_Cav1_002_SIR_ALX_FUS-2.tif (8.2 MB)

Thanks for the advice - I wouldn’t be surprised if this were the case. What software did you use to generate those scatter plots? I’d like to use it to see if my data has the same issues.

In this case, how are these super-resolution scopes supposed to be used for this type of analysis? If my images are like this, does that mean new images would need to be acquired to do this analysis properly? I do have standard confocal images - nothing fancy, no deconvolution - but I wonder whether they would be a more trustworthy source for quantification?

I’m afraid I’m not sure - I have seen more problems than solutions :slight_smile:

The plots were generated in Zen, which shows colocalization as a scatter plot similar to what can be created in FIJI:
https://imagej.net/File:BadOffsetConfusesCostesAutoThreshold.png
The picture in the lower left shows a similar scatter plot, though for the whole image rather than just a small ROI.

I suspect that, at the moment, people are just ignoring the problem and proceeding with the colocalization analysis. There are some… interesting emails on the confocal listserv recently on the subject that show just how varied people’s interpretations are. I have also seen arguments that some kinds of deconvolution are much better for colocalization analysis… as long as the light is reassigned correctly based on the objective, chromatic aberrations, etc etc.

In general, hopefully the size of the effect you are looking for is large enough that it dwarfs the variations in sample signal caused by the image processing. Obviously those AiryScan pixel values aren’t “exactly correct,” but they also aren’t so wrong that they generate some kind of weird picture. And even without processing there are plenty of other issues once your pixel size gets close to or beyond the resolution limit. If your focal plane for the two different colors is slightly off due to chromatic aberration, you might not be measuring where you think you are, even if you are measuring what you think you are! Certain objectives, like Plan-Apos, are going to be much better for this, but even then I wouldn’t really trust super-resolution colocalization of far-red and UV.

I realize I am mostly heaping on problems rather than solutions, but if your colocalization results are “strong,” it probably is not a problem. As pixel size gets smaller, though, the amount of shift you can tolerate gets far smaller: a single-pixel shift between two wavelengths can make a big difference if your objects are very small.

Side note on the previous post: using the “max” value is very dangerous, as you are relying on the stability of a single pixel (even if it is taken through the whole Z stack). If one sample has a slight bit of dirt or a ball of something, it might appear FAR brighter than anything you are actually interested in. I would take a look at the max values of a bunch of your images and see how stable they are (again, no experience with OMX, so maybe it will work great).
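One hedged way around that single-pixel fragility is to use a high percentile instead of the literal max (the function name and cutoff here are just illustrative, not a standard recipe):

```python
import numpy as np

def robust_max(img, pct=99.9):
    """Upper bound for rescaling that ignores isolated hot pixels or dirt.

    Uses a high percentile of the intensity distribution rather than the
    single brightest pixel, so one bright speck can't dominate the scaling.
    """
    return np.percentile(np.asarray(img, dtype=float), pct)
```

On an image where one speck of dirt is 10,000× brighter than the real signal, the percentile lands on the signal level while `img.max()` lands on the dirt.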




I only checked a couple of areas in your image, but while I do kind of see some loops, they don’t look nearly as bad as in my previous image.

Slightly larger area including one of the previous areas. Still looks fairly random, and I’m not seeing any consistent “one color slightly down and to the right of another color” type of problems either. Though that might have been already compensated for, as there is no green at the top of your image, and no red at the bottom, indicating that a pixel shift already took place.

This sort of thing in the “faint signal” areas does look a bit over-processed to me though.

And to add one more thing: while your actual background might be very low, another question you have to ask is what part of this you really want to analyze. For example, I am guessing that the circled area is not really weak expression, but instead areas that are going out of focus.


Even though this is one slice out of a 3D image, every voxel you measure is going to count, so if your threshold is zero, those will be included.

Speaking of voxels, I was just talking about this with my boss for a completely different reason, but how large are your voxels? And are you looking for objects being in the same general volume?

Calling all 3D printers: It would be neat to have a 3D printed semitransparent point spread function that contained “floating” proteins of various sizes, to demonstrate what colocalization means using different modalities.


These are all really good points. I think this may be more than I can reasonably figure out in my current timeframe for revisions. I did get a normalized hyperstack to work with, but the MCCs I get from Coloc 2 look questionable.

I’m currently using a mask made by adding the two thresholded channels together, and I’m wondering whether Coloc 2 uses the white pixels as the area to analyze? When I compare against the documentation, the areas that are white in my mask are blacked out in the single-channel MIPs - in which case there shouldn’t be any pixels left to analyze, so I don’t understand how it came up with values.

I’m thinking of maybe opting for a more intuitive and simple analysis, like what was discussed in the thread “Counting black and white pixels”.

This person recommends just calculating a fraction of overlapping pixels, which seems safer to me than the Coloc 2 black box at this point.
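The overlap-fraction idea is simple enough to sketch in a few lines of NumPy (my own illustration, not the code from that thread - note it throws away all intensity information, which is both its appeal and its limitation):

```python
import numpy as np

def overlap_fraction(mask1, mask2):
    """Fraction of mask1's foreground pixels that also lie in mask2.

    Both inputs are binary masks (anything nonzero counts as foreground).
    Purely 'on'/'off' - pixel intensities play no role once thresholded.
    """
    mask1 = np.asarray(mask1, dtype=bool)
    mask2 = np.asarray(mask2, dtype=bool)
    n1 = mask1.sum()
    return (mask1 & mask2).sum() / n1 if n1 else 0.0
```

Note it is asymmetric, like the two Manders coefficients: `overlap_fraction(a, b)` and `overlap_fraction(b, a)` answer different questions, so you’d want to report both.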

As for voxel size, would that be related to PSF? I could ask the microscopy core if you’d like.


Sort of. It also depends on your step size between frames of the Z stack, the camera pixel density (or pixel count for a point scanner), etc. The PSF tends to set the smallest voxel size that is reasonable.

Still nervous about that normalization. A mask from the sum of the two channels is probably good, though it works best if the two channels are very similar in intensity. Sometimes it is better to use 1.2X + 1.0Y or something like that (or else you are thresholding out one channel more than the other at low intensities). How do you determine that weighting, though? Heh. Maybe go off of the peak locations in the pixel distribution.
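That weighting idea can be sketched like this (NumPy; the function name and defaults are mine, and the 1.2 weight is just the example figure from above, not a recommendation):

```python
import numpy as np

def weighted_sum_mask(ch1, ch2, thr, w1=1.0, w2=1.0):
    """Build an analysis mask by thresholding a weighted sum of two channels.

    Boosting the weight of the dimmer channel keeps a single global threshold
    from preferentially discarding that channel's faint pixels.
    """
    combined = w1 * np.asarray(ch1, dtype=float) + w2 * np.asarray(ch2, dtype=float)
    return combined > thr
```

With equal weights, a pixel that is just at threshold in the dim channel alone gets dropped; bumping that channel’s weight to 1.2 keeps it in the mask.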


The negative pixel values are kind of weird, though. I’m not sure at what point in the processing those started showing up. They might also cause weirdness in any MCC values calculated without a threshold set.
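If you want a quick sanity check before feeding the stack to any coloc routine, you could clip the negatives and see how much of the image they account for (a NumPy sketch of my own, not anything Coloc 2 does internally):

```python
import numpy as np

def clip_negatives(img):
    """Zero out negative (reconstruction-artifact) values before coloc stats.

    Returns the clipped array plus the fraction of voxels that were negative,
    as a rough gauge of how much the reconstruction pushed below zero.
    """
    img = np.asarray(img, dtype=float)
    neg_frac = float((img < 0).mean())
    return np.clip(img, 0.0, None), neg_frac
```

If that fraction turns out to be large, it’s probably worth asking the facility what the reconstruction is doing before trusting intensity-weighted coefficients.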

Oh, and the reason I asked about the voxel size is, with some superresolution techniques, you might be getting down towards the size of your nanoparticle. Comparing that volume with the volume of your PSF might give you an idea of whether you really should be looking at colocalized pixels… or maybe touching pixels of different colors?

I’m seeing 0.04/0.04/0.125 µm in the metadata, which is about what AiryScan gets, and which isn’t pointillist. It seems a bit large, so I’m also wondering whether you have actually maxed out your resolution. Also, 0.04 µm is 40 nm - huge compared to a GFP tag (roughly 2 × 4 nm), but maybe not so big compared to a 20 nm diameter quantum dot. I guess my point is, if the quantum dot can take up most of a pixel, how much room is really left for what you want to colocalize it with?


The NPs here are 100 nm. Although I didn’t know about this issue before, I was already worried that this kind of analysis would be hard - in some images I literally see NPs sitting in a caveolar “cup” - so I was skeptical this analysis would capture that.

Is there a type of analysis you know of that would let me consider overlapping and touching pixels?

Yeah, nearest-neighbor-type measurements. Which I only know how to do using Imaris, so… might not be much help to you :slight_smile: But essentially, if you can segment each channel into “spots” (spheres), you look at the average distance between each center point (cheapest calculation) and its nearest neighbor of the other color. You can then compare the average distance or the distance distribution between two samples. Or, given a certain volume, you can create random distributions and see whether your two objects are more closely associated than you would expect from a completely random arrangement.
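For what it’s worth, once the spot centers are exported (say, as N × 3 coordinate arrays in microns), the nearest-neighbor distances themselves are easy to compute outside Imaris. A brute-force NumPy sketch (the function name is mine; fine for a few thousand spots, though a KD-tree would scale better):

```python
import numpy as np

def nearest_neighbor_distances(points_a, points_b):
    """For each spot center in points_a (N x 3), return the Euclidean
    distance to its nearest spot center in points_b (M x 3).

    Brute force: builds the full N x M pairwise distance matrix via
    broadcasting, then takes the row-wise minimum.
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)
```

Comparing the resulting distance distribution against one computed from randomly placed points in the same volume gives you the “closer than chance?” test described above.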

I am guessing based on the single image that your green channel is the NPs then.

My imaging core has Imaris! Do you have any literature/tutorials for that I could look into?

Nope, but if they have a service contract, the representatives have always been very helpful. Or maybe your imaging core has someone who knows Imaris well, or knows someone who does. If you want to find a way to host a full Z stack on Google Drive or something, I might be able to help further, but I wouldn’t want to flail about trying to precisely describe a 2D/3D process in text.
Essentially, though: create spots for red, spots for green, and if you have an updated version of Imaris, make sure “calculate shortest distance” is checked.
I’m not sure how spot-to-spot distances actually work, as I have only done spots-to-surfaces before. You might need to create spots for one channel and surfaces for the other (pseudo-spots). Not 100% sure.

Side note: whatever colocalization you have been seeing so far was probably mostly along the Z axis, then, since that dimension is ~125 nm.