We are attempting to use the TrackObjects module to propagate each cell’s object tracking number within a z-stack. The problem is that the tracking module assumes the first frame is in focus, which is largely true for a time-lapse movie. The issue I find is that the module uses the bottom z-plane image to seed the object numbers that are propagated up through the stack. This is probably a non-ideal z-plane for seeding the tracked objects in subsequent frames, since an out-of-focus plane is overly smooth, which causes objects that come into focus to be given the same object number. Could this module be updated so we could explicitly specify the mid-volume plane as the starting plane for TrackObjects when analyzing z-stacks?
Would you then like to track in two directions, up and down from the central plane? As far as I know, CellProfiler cannot do this. I think the fastest solution would be to re-save your data, e.g. using ImageJ, so that the first image is the one where you want to start tracking and the last image is the one where you want to end tracking. For your application, I guess you would have to save two data sets (upward and downward) and then “stitch” the corresponding tracks later, e.g. in Excel or R.
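For that stitching step, here is a minimal Python sketch as an alternative to Excel/R. It assumes each half has been exported as simple (z_index, track_label, measurement) rows, both halves start at the shared mid-volume plane, and the shared plane gives both runs the same track labels; all names and the row structure are illustrative, not actual CellProfiler output columns.

```python
# Hypothetical sketch: stitch the upward and downward half-stack track tables
# back into one bottom-to-top table. Rows are (z_index, track_label, value)
# tuples; this is illustrative, not real CellProfiler output.

def stitch_tracks(downward, upward):
    """Combine two half-stack track tables into one bottom-to-top table.

    downward: rows for mid..bottom, in tracking order (mid plane first)
    upward:   rows for mid..top, in tracking order (mid plane first)
    """
    # Reverse the downward half so it reads bottom -> mid, then append the
    # upward half, skipping its first plane (the duplicated mid plane).
    mid_z = upward[0][0]
    bottom_to_mid = list(reversed(downward))
    mid_to_top = [row for row in upward if row[0] != mid_z]
    return bottom_to_mid + mid_to_top

if __name__ == "__main__":
    down = [(25, 1, 0.9), (24, 1, 0.8), (23, 1, 0.7)]   # mid -> bottom
    up = [(25, 1, 0.9), (26, 1, 0.95), (27, 1, 0.85)]   # mid -> top
    print(stitch_tracks(down, up))  # z order: 23, 24, 25, 26, 27
```

In practice you would read the rows from the two exported spreadsheets and group by track label before stitching, but the ordering logic is the same.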
That is a creative solution!
I want to take a step back to make sure I understand what Derek is asking: why is it a problem for out-of-focus parts of objects to be included in the same object ID as the in-focus parts of the same object? If it is a problem, I could imagine the solution being to filter out-of-focus objects (i.e., delete or mask them) in the pipeline.
It might help if I explain that the TrackObjects module handles segmentation and tracking completely separately: (1) find all objects in every frame completely independently; (2) figure out which objects go with which across frames. So even if we changed it to “start” tracking in the middle and go in both directions, the results would be exactly the same, I believe. If there are indeed out-of-focus objects at the top and bottom, they would still be attached to the same object ID as anything they overlap.
I suspect I’m missing something, so please clarify!
I attached copies of the segmentation-outlined images with tracking-number labels from the TrackObjects module. The first image starts at the bottom of the z-stack, while the bottom image started at z=25, which is the mid-volume plane. I assume the first image has many more nuclei with the same tracking-number label because tracking started in the bottom plane, which is out of focus. I think this causes the same object tracking label to be propagated up the z-stack. I tested renumbering the mid-volume plane as z=00, which prevents the cells from being given the same tracking label. Is it possible to sort the images within a batch of z-stacks so that the TrackObjects module always starts in the mid-volume?
I’m confused as well: do you have many layers of nuclei? Which tracking module is it?
A sample image or two and the .cpproj file would probably be helpful at this juncture.
The green outlines are from the nuclear counterstain channel, the blue outlines are cell-boundary segmentation outlines from the brightfield channel, and the red outlines are from the CellMask Red channel. I am comparing brightfield vs. CellMask cell-membrane segmentation results in these images, with nuclei that are labeled using the TrackObjects module. This afternoon I will post an example pipeline to demonstrate the issue. Thanks for your help!
I will leave it to others to follow up, but if I understand correctly, it sounds like, from bottom to top, you think that objects are “splitting”, so to speak? I do want to point out there is a tool called CellProfiler Tracer (Windows only) which might help you see what is going on a little more easily: http://cellprofiler.org/tracer/
Awesome, thanks so much!
Here is the example TrackObjects pipeline. I have an image set to go with it, but the file size limit is preventing the upload from completing. Can I upload the images somewhere else?
TrackObjectsExample.cpproj (664.5 KB)
Sure; you can zip it and put it on Google Drive and share it with bcimini [at] broadinstitute [dot] org .
The image labeled z00 is the mid-volume plane, relabeled from z25 to z00 to force it to the top of the execution order. If you remove it and start at z01, which is the bottom plane of the z-stack, you will see the difference in the object tracking labels. I was hoping that the execution order could be shuffled a bit so that it goes Z-middle -> Z-top, then Z-middle -> Z-bottom. It seems like a relatively simple fix which could help tracking of objects in 3D.
I see your problem now; my guess is that no one considered that someone would use that module to track through space rather than time (which is clever, btw!), so the values start at “t=1”, not “z=middle”. Generally speaking, CP expects that images will be sequential, so as of right now there’s no way to do what you want to do.
I know we’re adding 3D support soon so hopefully once we do there will be a module that does exactly what you want, and I can try to make sure there is as the plans for that firm up. In the meantime I see two possible ways to work around this limitation:
A) Rather than the Distance method, use the LAP method of tracking, run the second phase, and play with the costs of merging and splitting objects until you get the behavior you want. Note that because LAP will reassign the track object labels when it does the merges and splits, the labels as you’re currently generating them are useless; this is where you’ll want the CellProfiler Tracer that Anne mentioned above to actually see the final “tracks”. I think this will actually perform pretty well, but without trying it I can’t know for sure.
B) Do what Christian suggested above and split the stack in half yourself. You can either do this with a cleverly written renaming script, or, if you don’t want to mess with that, you can use the following procedure in FIJI (it’s a bit annoying, but you should be able to macro it quite easily, assuming the number of planes is the same every time):
(Note: if you still/already have your images in some format that FIJI can read as a hyperstack, ignore steps 1 and 2; I’m just working off the format I have from your email. Also note that I’m assuming the full stack is z1-z50.)
1. Import your images into FIJI using File->Import->Image Sequence
2. Use Image->Hyperstacks->Stack To Hyperstack to make a hyperstack
3. Use Image->Stacks->Tools->Make Substack to make a substack of slices 26:50
4. File->Save As->Image Sequence for half 1 (call it inputname_top)
5. Repeat step 3 to make a substack of slices 1:25
6. File->Save As->Image Sequence for half 2 (call it inputname_bottom)
7. Repeat steps 3-6 for all channels
8. Run CP as if the two halves were completely separate movies
9. Stitch the values back together in Excel or R
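If you’d rather go the renaming-script route than FIJI, here’s a minimal Python sketch. The img_z##.tif filename pattern, folder layout, and mid-plane index are assumptions for illustration; adjust them to your data. It copies each plane into an “upward” and/or “downward” half-stack renumbered so both halves start at the mid plane (the mid plane is shared by both halves, which keeps the seed labels consistent):

```python
# Illustrative sketch: split a z-stack's image files into two half-stacks
# whose filenames both start at the mid-volume plane, so TrackObjects begins
# tracking there in each half. Filename pattern "...z<NN>..." is an assumption.
import os
import re
import shutil

def split_stack(src_dir, dst_top, dst_bottom, mid=25):
    os.makedirs(dst_top, exist_ok=True)
    os.makedirs(dst_bottom, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        m = re.search(r"z(\d+)", name)
        if not m:
            continue  # skip files without a z index in the name
        z = int(m.group(1))
        src = os.path.join(src_dir, name)
        if z >= mid:
            # upward half: mid plane becomes frame z01, counting toward the top
            new = re.sub(r"z\d+", f"z{z - mid + 1:02d}", name)
            shutil.copy(src, os.path.join(dst_top, new))
        if z <= mid:
            # downward half: mid plane is also frame z01 here (shared seed plane)
            new = re.sub(r"z\d+", f"z{mid - z + 1:02d}", name)
            shutil.copy(src, os.path.join(dst_bottom, new))
```

Run it once per channel folder, then point CP at the two output folders as if they were separate movies.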
I hope that helped! Be sure to let me know if not.
I’m not really following the entire conversation, but I just realized LoadData might do what you want. Are you just trying to load the images in a stack out of order? LoadData lets you load images in whatever order you like, and I assume this goes for the frame number within a movie. So perhaps all you’d need to do is script something to write the ordering of the frames you want?
That was the temporary workaround idea until we start using 3D object segmentation. It works fairly well if the two LoadData file lists are ordered from the mid-volume plane to the last image plane and from the mid-volume plane to the first image plane. I found that if the same mid-volume image is used in both file lists, the object tracking labels will stay the same for the top and bottom stacks.
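A rough sketch of scripting that ordering: build the two frame orders (mid -> top and mid -> bottom, sharing the mid plane) and write each as a LoadData-style CSV. The “DNA” channel name, folder path, and filename pattern are made up for illustration; a real LoadData CSV needs an Image_FileName_/Image_PathName_ column pair per channel, and possibly metadata columns for grouping.

```python
# Illustrative sketch: generate the two LoadData file lists described above.
import csv

def mid_ordered(filenames, mid_index):
    """Split a bottom-to-top file list into (mid->top, mid->bottom) orders."""
    upward = filenames[mid_index:]        # mid, mid+1, ..., top
    downward = filenames[mid_index::-1]   # mid, mid-1, ..., bottom
    return upward, downward

def write_loaddata_csv(path, folder, filenames):
    # One row per cycle; LoadData processes rows top to bottom, so the row
    # order here is the tracking order.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Image_FileName_DNA", "Image_PathName_DNA"])
        for name in filenames:
            writer.writerow([name, folder])
```

For example, with planes z01-z50 and the mid plane at index 24, `mid_ordered` returns the z25->z50 list and the z25->z01 list, which you would write out as two CSVs and run through CP as two separate “movies”.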
Hi all! I’m very new to the CellProfiler community, but loving it so far! I have a very basic question, please excuse me if it has been answered elsewhere.
I have a z-stack of images, and I would like to track the nuclei through the slices so I can later extract some details (fluorescence intensity, etc.) from them. The segmentation works well, but when I load multiple images, the pipeline only treats them individually, rather than tracking through 3D. I have compared my pipeline to one where the 3D tracking on nuclei works, but can’t find the error.
Thanks a lot!
Welcome to the community!!!
I am not sure what kind of images you are uploading, i.e., is it a single z-stack image or individual images for every frame in the z-stack?
In the latter case, you might need to use “groups” to group them as a set of images. Also, you may want to use the 3D option.
In either case, it would be great if you could share a sample image and your pipeline so we can help you better!
Read more on our site.
Yokogawa CV8000 - The Ultimate in Confocal HCS
Join us at, https://www.slas2020.org
Hi @Lakshmi, thank you so much for your reply!
I have a z-stack series, from which I want to segment the DAPI channel. The person who introduced me to CellProfiler is on holiday now (unfortunately), but suggested saving the channel as an image sequence using Fiji and using those images in CP to allow for nuclei tracking through 3D.
Sophie_1.cpproj (449.1 KB)
Attached is the rudimentary pipeline I have so far, and a couple of z-slices of the DAPI channel through which I need to track my primary objects (i.e., nuclei). Thanks a lot for your help! I will keep trying to fix it, but all my attempts with grouping have failed so far (I don’t know how to use this tool very well yet).
Also, here is an example of the segmented and tracked file of __0013 to show that the pipeline works per se.
Groups in CP simply lets you group a set of images together for processing. In your case, I have grouped all the images (from every plane) of a channel. You can learn more about Groups here. You will need to clearly indicate the channel, plane, and experiment details in NamesAndTypes. This info is available in your metadata, or you can extract it from the file name; in this case, I have taken it from the file name.
In the SaveImages module, I am saving after every cycle (here, every set of images, i.e., the full stack in your case) into a subfolder, with sequential filenames.
Please find the attached modified pipeline & sample output.
Sophie_11.cpproj (819.2 KB) tracks.zip (64.4 KB)
Thank you very much @Lakshmi, that was very helpful! The one thing I’m worried about now is that the same nucleus is not labelled with the same number throughout the z-slices, which might be problematic once I need to extract fluorescence intensity from the whole of each nucleus throughout the stack. I had hoped that the ‘Max pixel distance’ setting would ensure they are counted as one (maybe it’s too small?).
Can I use the RelateObjects module to relate the different objects that represent the same nucleus throughout the z-slices, to indicate that they belong together? I’ll read up more on the instructions you sent, too!