Quantitative assessment of protein colocalization

Sample image and/or code

Background

I am studying mitophagy, comparing the expression of three proteins (TAX1BP1, OPTN and SQSTM1) in damaged versus undamaged cells. These proteins are particularly linked to the atg5 gene, so I have a set of images comparing atg5 knockouts to wild-type cells, with and without damage. The image above is one such example: red is my protein of interest, green is Cytochrome C (a proxy for mitochondria) and blue is my DAPI stain. The images were taken on a wide-field microscope.
To note, I have around 20 images in each bin (for example, TAX1BP1 / atg5 KO / damaged is one bin); the images are provided as separate channels and are 2D.

Analysis goals

I want to evaluate the differing levels of protein interaction with mitochondria between damaged and undamaged cells, nested within the different cell lines. The aim is to see which proteins are expressed close to mitochondria when atg5 is not present. I want this to be a quantitative analysis.
So far I have:

  • built a k-nearest-neighbour metric that takes each image as a data frame, where each row holds the co-ordinates of a pixel (X and Y) and the pixel value at that co-ordinate
  • the metric uses a KD-tree to search for nearest neighbours
  • carried out a sensitivity analysis on the search radius, since the interaction distance between my protein of interest and Cytochrome C is not clearly defined
  • tried to account for the noise associated with wide-field microscopy by setting arbitrary thresholds on pixel values (from 500 to 1000), which I have compared qualitatively using ImageJ's plot profiles
  • decided to compare my metric against other colocalization approaches, namely Voronoi tessellation
  • deconvolved my images as a comparison, so I can use them for Voronoi tessellation

All of my code is in Python, whereas the images are pre-processed in ImageJ (for example, extracting the X, Y co-ordinates).
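For reference, a radius-based KD-tree comparison along these lines can be sketched in a few lines with `scipy.spatial.cKDTree`. This is not the original code — the function name, default thresholds and radius below are placeholders standing in for the values explored in the sensitivity analyses:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_coloc_fraction(red, green, red_thresh=750, green_thresh=750, radius=3.0):
    """Fraction of above-threshold red pixels that have at least one
    above-threshold green pixel within `radius` pixels.
    `red` and `green` are 2D intensity arrays of equal shape."""
    red_xy = np.column_stack(np.nonzero(red > red_thresh))
    green_xy = np.column_stack(np.nonzero(green > green_thresh))
    if len(red_xy) == 0 or len(green_xy) == 0:
        return 0.0
    tree = cKDTree(green_xy)
    # query_ball_point returns, per red pixel, the green pixels within radius
    hits = tree.query_ball_point(red_xy, r=radius)
    return sum(1 for h in hits if h) / len(red_xy)
```

Working directly from the NumPy arrays this way also avoids the ImageJ round-trip of exporting X, Y co-ordinate tables.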

Challenges

Since I have not carried out research like this before, I have run a number of sensitivity analyses (search radius, thresholding, and different point-spread functions for deconvolution), but I do not know whether this is a valid approach. I would essentially like some feedback on whether I am doing too much, or whether there are major assumptions I am disregarding.
One important question: since I have a 2D image of 3D cells, how do I distinguish pixels that are only close to each other along the z-axis from pixels that are genuinely close in the other two?

This is my first time doing a study like this, so I would be more than happy to receive any sort of feedback. If there is any more information you need to help answer my question, please do say.

This is a standard problem in most of light microscopy (aside from lightsheet or other special techniques). The Z axis is always a problem, and the intensity of your fluorophores will depend on how close they are to the focal plane. Especially with a wide-field microscope, there is not too much you can easily do about it. One approach might be taking small Z stacks and performing deconvolution to try to remove certain kinds of background. *Edit: I see you mentioned deconvolution.

In general, issues like that limit the size of the effect you can measure. If you have a Manders threshold such that one condition shows 0% overlap and another shows 80%, then even a low SNR is not terribly problematic.
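As a concrete point of reference, the Manders M1 coefficient (the fraction of total red intensity found in green-positive pixels) is only a few lines to compute. This is a minimal sketch; the function name is illustrative, and the green threshold is assumed to be chosen separately (for example, by the thresholding sensitivity analysis already described):

```python
import numpy as np

def manders_m1(red, green, green_thresh):
    """Manders M1: fraction of total red intensity located in pixels
    where the green channel exceeds its threshold."""
    red = red.astype(float)
    overlap = red[green > green_thresh].sum()
    total = red.sum()
    return overlap / total if total > 0 else 0.0
```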

Pixels are very large compared to a fluorophore. If you are looking for interactions between your proteins, you are looking within a single pixel, even at high resolution. FRET, PLA and other techniques exist because, in general, a pixel is a huge volume. If you truly want to look for short-range interactions, you might want to look into PLA; then you only have one channel for two targets, and a very easy measurement.

I realize you are looking for X near Y, but another measurement of interest might be aggregation -

(thanks @smcardle for remembering), or a comparison against a random distribution. In other words, does red show up within X distance of green more often than would be expected by chance? You would need a threshold that makes sense for your red channel for that to work.
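That chance comparison can be sketched as a simple randomization test: measure the fraction of thresholded red pixels within some distance of green, then compare against the same fraction for randomly relocated red pixels. The function name and parameters here are illustrative, assuming boolean masks for each channel:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def near_green_excess(red_mask, green_mask, max_dist=3.0, n_shuffles=200, seed=0):
    """Fraction of red-positive pixels within `max_dist` of green,
    versus the chance level from randomly relocated red pixels."""
    rng = np.random.default_rng(seed)
    # distance from every pixel to the nearest green-positive pixel
    dist_to_green = distance_transform_edt(~green_mask)
    observed = (dist_to_green[red_mask] <= max_dist).mean()
    n_red = red_mask.sum()
    flat = dist_to_green.ravel()
    chance = np.array([
        (flat[rng.choice(flat.size, n_red, replace=False)] <= max_dist).mean()
        for _ in range(n_shuffles)
    ])
    return observed, chance.mean()
```

Shuffling within the cell outline (rather than the whole image) would be a fairer null model, but that needs a cell mask.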

If you are not after interactions but want intensity by distance, create a green mask, dilate it out 1 or 2 pixels at a time, and check the average red intensity per pixel at each distance. Compare between samples.
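Rather than looping over explicit dilations, the same intensity-by-distance profile can be computed in one pass with a Euclidean distance transform (`scipy.ndimage.distance_transform_edt`). This is a sketch with hypothetical names; band 0 is the green mask itself, and each subsequent band is a 1-pixel-wide ring around it:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def red_intensity_by_distance(red, green_mask, max_dist=5):
    """Mean red intensity per pixel in 1-pixel-wide distance bands
    around the green mask (band 0 = inside the mask)."""
    # distance from every pixel to the nearest green-positive pixel
    dist = distance_transform_edt(~green_mask)
    bands = {}
    for d in range(max_dist + 1):
        in_band = (dist >= d) & (dist < d + 1)
        if in_band.any():
            bands[d] = float(red[in_band].mean())
    return bands
```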

Random musings, mostly, hopefully someone else will have better ideas.

Thank you, I think this really helps. I gather there’s probably a lot of ways to do this correctly and incorrectly.
