Sample image and/or code
Background
I am studying mitophagy, comparing the expression of three proteins (TAX1BP1, OPTN and SQSTM1) when mitochondria are damaged. These proteins are particularly linked to the atg5 gene, so I have a set of images comparing atg5 knockout cells to wild-type cells, with and without damage. The image above is one such example: red is my protein of interest, green is Cytochrome C (a proxy for mitochondria) and blue is the DAPI stain. The images were taken on a wide-field microscope.
To note: I have around 20 images in each bin (for example, TAX1BP1 / atg5 KO / damaged is one bin). The images are provided as separate channels and are 2D.
Analysis goals
I want to evaluate how the level of protein interaction with mitochondria differs between damaged and undamaged cells, nested within the different cell lines. The aim is to see which proteins are expressed close to mitochondria when atg5 is not present. I want this analysis to be quantitative.
So far I have:
- built a k-nearest-neighbour metric that takes each image file as a data frame in which every row holds the coordinates of a pixel (X and Y) and the pixel value at that coordinate (a simplified sketch of this metric is below the list)
- the metric uses a KD-tree to search for the nearest neighbours
- carried out a sensitivity analysis over the search radius, since the interaction distance between my protein of interest and Cytochrome C is not clearly defined
- tried to account for the noise associated with wide-field microscopy by setting arbitrary intensity thresholds (from 500 to 1000), which I have compared qualitatively using ImageJ's plot profiles
- I want to compare my metric against other colocalization approaches, namely Voronoi tessellation (see the second sketch below the list)
- for this comparison I have deconvolved my images so they can be used for Voronoi tessellation
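To make the metric concrete, here is a minimal sketch of the kind of computation I am doing. It is simplified: it assumes the metric reports the fraction of above-threshold protein pixels with at least one above-threshold Cytochrome C pixel within the search radius, and the threshold and radius values are just placeholders that I sweep in the sensitivity analyses:

```python
import numpy as np
from scipy.spatial import cKDTree

def coloc_fraction(protein_df, mito_df, threshold=750, radius=5.0):
    """Fraction of above-threshold protein pixels that have at least one
    above-threshold Cytochrome C pixel within `radius` (in pixels).

    Both data frames have columns X, Y and value (one row per pixel).
    The threshold (here 750, within my 500-1000 range) and the radius
    are placeholders swept in the sensitivity analyses.
    """
    prot = protein_df.loc[protein_df["value"] > threshold, ["X", "Y"]].to_numpy()
    mito = mito_df.loc[mito_df["value"] > threshold, ["X", "Y"]].to_numpy()
    if len(prot) == 0 or len(mito) == 0:
        return np.nan
    tree = cKDTree(mito)              # KD-tree over mitochondrial pixels
    dist, _ = tree.query(prot, k=1)   # distance to nearest mito pixel
    return float(np.mean(dist <= radius))
```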
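For the Voronoi comparison, my understanding is that the inverse of each point's Voronoi cell area can serve as a local density estimate (in the spirit of SR-Tesseler-style analyses). A minimal sketch, assuming scipy.spatial.Voronoi and taking above-threshold pixel coordinates from a deconvolved channel as the points:

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_densities(points):
    """Local density estimate per point: 1 / (area of its Voronoi cell).

    `points` is an (N, 2) array of X, Y coordinates, e.g. above-threshold
    protein pixels from a deconvolved image. Cells that are open at the
    image border (they contain vertex index -1) are returned as NaN.
    """
    vor = Voronoi(points)
    densities = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                      # open cell at the border
        poly = vor.vertices[region]
        # shoelace formula for the polygon area
        x, y = poly[:, 0], poly[:, 1]
        area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        if area > 0:
            densities[i] = 1.0 / area
    return densities
```

The resulting per-point densities could then be compared between protein pixels that fall inside and outside the Cytochrome C region.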
All of my analysis code is in Python, whereas the images are pre-processed in ImageJ (for example, extracting the X, Y coordinates).
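For reference, the data frame format the metric expects could equally be produced directly in Python; a sketch of the equivalent conversion (this assumes the tifffile library, and the file name is a placeholder, not my actual ImageJ workflow):

```python
import numpy as np
import pandas as pd
import tifffile

def image_to_dataframe(path):
    """Load one 2D channel and flatten it into rows of (X, Y, value)."""
    img = tifffile.imread(path)      # 2D array, shape (rows, cols)
    ys, xs = np.indices(img.shape)   # pixel grid coordinates
    return pd.DataFrame({"X": xs.ravel(), "Y": ys.ravel(), "value": img.ravel()})

protein_df = image_to_dataframe("TAX1BP1_atg5KO_damaged_red.tif")  # placeholder name
```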
Challenges
Since I have not carried out research like this before, I have run a number of sensitivity analyses (search radius, thresholding, and different point spread functions for deconvolution), but I do not know whether this is a valid approach. Essentially, I would like feedback on whether I am doing too much, or whether there are major assumptions I am disregarding.
One important question: since I have 2D images of 3D cells, how do I distinguish pixels that only appear close to each other because of projection along the z-axis from pixels that are genuinely close in x and y?
This is my first time doing a study like this, so I would be more than happy to receive any sort of feedback. If there is any more information you need to help answer my question, please do say.