I’m working with recombinant FRET- and ddRFP-based protease sensors, monitoring changes in the FRET ratio/ddRFP fluorescence intensity in live cells over time upon protease activation. I need to obtain “spatiotemporal” information about protease activation in the cells, i.e., in which part(s) of the cell it starts and how it spreads, at least at the level of soma vs. protrusions.
The problem is that my cells move and change morphology dramatically over time (see the attached examples from different time points): the soma shrinks and moves, the nucleus condenses and moves together with the cell, the protrusions extend, erode and then extend again, etc. All this makes it very difficult to apply masks and ROIs for consistent, reproducible analysis; furthermore, it is very difficult to preserve the low-intensity protrusions when thresholding (see the example).
For these reasons, so far I’ve been segmenting the images manually, selecting the cell and setting the rest of the pixels to 0 for each time point separately; given that I have around 300 time points and 2 channels per experiment, you can probably imagine how painful that is.
So I’m wondering: is there a way to create a proper mask (e.g., by thresholding or Trainable Weka Segmentation) and then “tell” Fiji to keep only the pixels of the raw image that belong to my cell and set the rest to 0?
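To make the operation concrete, here is a minimal NumPy sketch of what I mean (illustrative only, with made-up numbers and a hypothetical fixed threshold, not actual Fiji code): for each time point, build a binary mask of the cell, then zero every raw pixel outside the mask while keeping the original intensities inside it.

```python
import numpy as np

def apply_cell_mask(frame, threshold):
    """Keep raw intensities of pixels above the (hypothetical) threshold,
    i.e. the 'cell', and set everything else to 0 — equivalent to
    multiplying the raw frame by a 0/1 binary mask."""
    mask = frame > threshold
    return np.where(mask, frame, 0)

# Made-up tiny single-channel "time point": bright cell, dim background
frame = np.array([[0, 1, 8],
                  [1, 9, 7],
                  [0, 1, 0]], dtype=np.uint16)

masked = apply_cell_mask(frame, threshold=5)
# masked is now [[0, 0, 8], [0, 9, 7], [0, 0, 0]]
```

In practice the same mask would have to be derived per time point (because the cell moves and changes shape) and then applied to both channels of that time point, so that the FRET ratio is computed only over cell pixels.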