Single-cell analysis: time-lapse analysis

Hello everyone,

I just started using CellProfiler (literally today) to track fluorescence intensity over time in single cells. I've watched many tutorials, but I'm still having some difficulties, and I hope you can help me. This is what I need to do:

  • identify and mark the positions of single cells in a first image (stained with PI). The cells are fixed, so they maintain their original positions.
  • overlay these outlines on the following set of non-stained images to track fluorescence intensity over time.

Unfortunately, I cannot manage to overlay the positions selected on the first image. The software re-selects new positions for each new image, constantly increasing/decreasing the total number of cells!
How can I fix this issue?

Thank you in advance for your help!

Hi @GiammarcoNebbioso,

Welcome to the image.sc forum! We’d be happy to try to take a look at your workflow to see if we can help. Could you share your initial pipeline, example images, and some screenshots to illustrate what you mean when you say:

The software re-selects new positions for each new image, constantly increasing/decreasing the total number of cells!

That will help us as we try to help you. Welcome to the CellProfiler community!
Pearl

Hi @pearl-ryder ,

Thank you for getting back to me; I appreciate it.
This is the pipeline I used:
first attempt.cpproj (810.5 KB)

My goal is to first identify single cells from one image, such as this one; these cells were PI-stained. The cells do not move during the experiment.
PI3 pos1.tif (256.4 KB)

Once the positions of the cells are marked in the PI-stained image, I need the software to overlay those positions on the rest of the images, which were taken in a time-lapse manner.

The big goal is to measure fluorescence intensity over time in these single cells. As you can see from the following spreadsheet, the software identifies primary (nuclei) and secondary (cells) objects separately for each image in the sequence (columns 2 and 3). Instead, I need the software to maintain the positions of the 376 cells identified in the first image (the PI-stained image) and simply measure the increase/decrease of fluorescence in each of the time-lapse images, at those same 376 positions. I thought I had solved this issue by adding the OverlayObjects module, but it does not work.
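To illustrate the goal being described, here is a scripted sketch of the "segment once, measure every frame" idea outside CellProfiler (an illustration only; it assumes numpy and scipy are available, and the threshold, image sizes, and intensities are made up):

```python
# Sketch: segment cells once in the stained reference frame, then reuse
# those fixed labels to measure fluorescence in each later frame.
import numpy as np
from scipy import ndimage

def segment_reference(frame):
    """Label cells in the reference frame (crude fixed threshold)."""
    mask = frame > frame.max() / 2
    labels, n_cells = ndimage.label(mask)
    return labels, n_cells

def mean_intensity_per_cell(labels, frame):
    """Mean fluorescence of each labeled cell in one time-lapse frame."""
    return {lab: float(frame[labels == lab].mean())
            for lab in range(1, int(labels.max()) + 1)}

# Synthetic example: two "cells" whose intensity changes over time.
ref = np.zeros((64, 64), dtype=np.uint16)
ref[10:20, 10:20] = 1000
ref[40:50, 40:50] = 1000
labels, n_cells = segment_reference(ref)

t1 = np.zeros((64, 64), dtype=np.uint16)
t1[10:20, 10:20] = 300   # cell 1 dimmed
t1[40:50, 40:50] = 800   # cell 2 brightened
print(n_cells, mean_intensity_per_cell(labels, t1))
```

The key point is that `labels` is computed only once, from the PI-stained image, and every later frame is measured against those same positions.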

I'll also take this occasion to ask another question. Technically, I have 10 different positions for each experiment, meaning each position contains a different PI-stained image and different time-lapse images. The total number of time-lapse images, however, is the same for every position, and they were all taken at the same times. For example, I would have 10 images at time t=1, 10 images at t=2, etc.
My question is: is there a way to run the same analysis I described above on this whole set of images at once, or do I have to run a new pipeline for each position?
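The ten-positions-at-once question boils down to grouping files by position so each movie is processed independently in a single run. A tiny scripted analogue (the `pos<N>_t<M>.tif` naming is hypothetical; adjust to your real filenames):

```python
# Group all time-lapse files by position, the scripted analogue of
# running every movie in one batch rather than one pipeline per position.
from itertools import groupby

files = ["pos2_t1.tif", "pos1_t2.tif", "pos1_t1.tif", "pos2_t2.tif"]

def position_of(name):
    # "pos1_t2.tif" -> "pos1"
    return name.split("_")[0]

groups = {pos: list(frames)
          for pos, frames in groupby(sorted(files), key=position_of)}
print(groups)
```

Each key is one movie; the analysis then loops over `groups` instead of being re-run by hand per position.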

I hope my explanations were clear enough.
Thank you in advance for your support, it is much appreciated!

Giammarco Nebbioso

So in general, what you'll want to do is run two pipelines consecutively:

  1. One in which you load in all of your movies but look only at the first frame of each, then save out a picture that represents the locations of all of the objects. You only need to do this step once.
    THEN
  2. A pipeline that loads in all of your movies (fine to load them all; make sure Groups is turned on and set correctly!) AND the picture of the objects that you made, and applies that picture of the objects to each and every frame of your movie. You can read more about that in the documentation for the “Image set matching order” option in NamesAndTypes, as well as in the post below (and the posts linked from it).
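The two steps above hinge on the object picture surviving a round trip through disk with its integer object IDs intact. A minimal sketch of that round trip (outside CellProfiler, assuming numpy and tifffile are available; the filename is illustrative):

```python
# Pipeline 1 writes the first-frame labels to a 16-bit TIFF; pipeline 2
# reloads that TIFF and treats each distinct integer value as one object.
import os
import tempfile

import numpy as np
import tifffile

labels = np.zeros((32, 32), dtype=np.uint16)
labels[5:10, 5:10] = 1     # object 1
labels[20:25, 20:25] = 2   # object 2

path = os.path.join(tempfile.mkdtemp(), "objects_pos1.tif")
tifffile.imwrite(path, labels)      # "save the picture of the objects"
reloaded = tifffile.imread(path)    # a later run reloads it as objects
print(np.array_equal(reloaded, labels))
```

uint16 matters here: it preserves up to 65535 distinct object IDs, whereas an 8-bit save would clip anything above 255.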

Good luck!

Hello @bcimini,
thank you for your help.

I have managed to create the first pipeline (I think). pipeline1.cpproj (812.4 KB)
I saved the objects as images themselves (by using ConvertObjectsToImage in “uint16” mode and then saving a 16-bit TIFF), as you mentioned in other posts. This is an example of one of the images I obtained. Is it normal that they are all black? PI3 pos1.tiff (512.3 KB)

Also, I cannot figure out the second pipeline. This is what I have so far (to make it easier, I only analyzed one position).
attempt3.cpproj (193.6 KB)
Specifically, I cannot figure out how to set up the metadata. I followed the thread example you shared, but I cannot manage to group all the images by folder name. Also, I do not quite understand the last step, as I cannot relate it to my project:

“[…] Then you need to add all your image types to NamesAndTypes- once you do, you can now match by the common ‘Folder’ Metadata, and then for your two channels of fura add the T dimension, like so.”

I really appreciate your help in advance.

Giammarco

Hi Giammarco,

That image may look all black in your system’s photo viewer, but that’s just due to it being a 16-bit image. Here’s what it looks like when I open it in ImageJ.
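The "all black" effect is easy to see with a little arithmetic: with a few hundred objects, the largest pixel value is tiny compared with the 16-bit maximum of 65535, so ordinary viewers render the image near-black. A numpy-only illustration (the 376 is taken from the cell count mentioned earlier in the thread):

```python
# A 16-bit label image with object IDs up to 376 uses less than 1% of
# the 0..65535 display range; rescaling makes the labels visible.
import numpy as np

labels = np.zeros((8, 8), dtype=np.uint16)
labels[2:4, 2:4] = 376   # highest object ID in this example

print(labels.max() / 65535)  # roughly 0.006 of full brightness
display = (labels.astype(np.float64) / labels.max() * 255).astype(np.uint8)
print(display.max())         # now spans the full 8-bit display range
```

ImageJ does this kind of contrast stretching automatically, which is why the same file looks fine there.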

To get the second pipeline to work, you need to be extracting at least two things in the Metadata module:

  1. Some piece of metadata that will link your object file to your movie; maybe it’s the position number, maybe the folder it’s in, etc. Anything will work, as long as it is a) unique for each movie and b) identical for all the timepoints of your movie AND the object file.
  2. Some piece of metadata that states the timepoint within each movie; images from different movies can have the same value for this piece of metadata (i.e. there can be a “Timepoint”, or whatever you call it, that equals 1 in every movie), but it must be unique for every frame within a single movie.

The first piece of metadata will go in the first row of the matching table in NamesAndTypes for all the channels/objects that you have (it will also be what you group by in the Groups module); the second piece of metadata will go in the second row of the matching table, but ONLY for the channels that change every frame, not for the objects or anything else that should be held constant the whole movie.
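The two metadata pieces can be pictured as a filename regex (the `pos<N>_t<M>.tif` naming pattern here is hypothetical; adjust it to your real filenames). `Position` is shared by every frame of one movie plus its object file; `Timepoint` is unique per frame and absent from the object file:

```python
# Extract Position and Timepoint metadata from hypothetical filenames.
import re

PATTERN = re.compile(r"pos(?P<Position>\d+)_t(?P<Timepoint>\d+)\.tif$")

def extract_metadata(filename):
    m = PATTERN.match(filename)
    return {k: int(v) for k, v in m.groupdict().items()} if m else None

print(extract_metadata("pos1_t03.tif"))      # both pieces present
print(extract_metadata("objects_pos1.tif"))  # object file: no Timepoint
```

In CellProfiler terms, `Position` goes in the first row of the matching table (and in Groups), while `Timepoint` goes in the second row for the time-varying channels only.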

I’ve screenshotted below what I mean - in this I assumed that the piece of metadata that links everything is called “Position” and the piece unique to each frame is “Timepoint”. Because I don’t know exactly how your files are named and/or arranged, I haven’t uploaded the project file; you’ll probably have to tweak this to match how things look on your system, but hopefully it gives you the idea a bit better!

Hi @bcimini ,
thanks so much for your help; I really appreciate it. I have managed to set up the metadata correctly (I think). However, on my first attempt with this pipeline, pipeline2.cpproj (977.2 KB), I realized that the final spreadsheet didn’t detect any objects from the initial image. I thought that ConvertImageToObjects would handle it, but that is clearly not the case.
I then tried to set the background image (PI) as ‘Objects’ in NamesAndTypes and the time-lapse images as ‘Binary mask’ and ‘Greyscale’ under ‘Select the image type’. However, when I try this, I get this message:

What am I doing wrong???

What happens if/when you hit “OK”? Do you get anything in NamesAndTypes or not? I also can’t see the matching in that screenshot; have you confirmed it’s correct?

(You also don’t have to use ConvertImageToObjects in the pipeline; loading the image as type “objects” is sufficient.)

If I hit OK, that’s what I get.

If I set both image inputs to ‘Greyscale’ (rather than ‘Objects’ for the background image), I obtain the following matching:

[Screenshot: Screen Shot 2021-03-26 at 15.38.31]

This is how I set up Metadata and NamesAndTypes to obtain the above matching:

You should be able to set PI as objects; when you do, you may just need to re-set-up the matching at the bottom.

Thank you Beth,

I finally managed to obtain what I was looking for by re-setting the matching at the bottom.

I really appreciated your help!