How to track spots in a time lapse and Z-stack

Hi everyone! I need your help!

My name is Sara. I'm working on the movement analysis of telomeres (labeled with GFP) in yeast cells. To do this I have to follow a point during a time lapse and find the z-slice where the point is most in focus (my point moves in X, Y and Z). I use the ManualTrack or MTrackJ plugins of ImageJ, but with these plugins I have to manually select my point of interest throughout the whole time lapse and look for the focused z-slice, for each of my cells. I would like to find one or more plugins to analyze my images faster. For this I need a plugin that automatically detects the labeled telomere in each cell in the most focused z-slice, draws the track that it follows along the time lapse, and measures the average speed at which my point moves and the distance it travels (these measures are obtained automatically with the MTrackJ plugin). I would like to know if someone knows how to do this and which plugins could be used, or maybe explain to me how to write a simple macro.
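For what it's worth, once any plugin has produced the track coordinates, the distance travelled and the average speed are just a sum over frame-to-frame displacements. A minimal Python sketch, assuming the coordinates are already calibrated in microns and `dt` is the frame interval in seconds:

```python
import math

def track_stats(points, dt):
    """Total path length and average speed of one track.

    points: list of (x, y, z) coordinates in microns, one per frame
    dt:     time between frames in seconds
    Returns (total_distance_um, average_speed_um_per_s).
    """
    total = 0.0
    # sum the 3D displacement between every pair of consecutive frames
    for p0, p1 in zip(points, points[1:]):
        total += math.dist(p0, p1)
    return total, total / (dt * (len(points) - 1))
```

For example, a spot that moves 5 µm over two 8 s intervals has an average speed of 5/16 = 0.3125 µm/s.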

Initially, to try to simplify the process, I thought about doing a maximum projection, but because my point moves faster than the time it takes the microscope to acquire the full z-stack, sometimes two points appear instead of one. So I need a plugin that can detect the point within a z-stack.

Also, I have tried to use "Find Maxima" (single points) for point detection. It detects points correctly, but only in one frame. I have found a macro to run Find Maxima over a stack, and it seems to look for points throughout the z-stack and the time lapse, but afterwards they are not marked in the image.
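Just to illustrate the "most focused z-slice" idea: for a diffraction-limited spot, the in-focus slice is usually the one where the spot is brightest, so picking the slice with the highest peak intensity is a reasonable first pass. A minimal numpy sketch of that logic (not an ImageJ macro):

```python
import numpy as np

def best_z(stack):
    """Index of the z-slice containing the brightest pixel.

    stack: 3D numpy array (z, y, x). For a point-like GFP spot, the
    in-focus slice is typically the one with the highest peak intensity.
    """
    # peak intensity of each slice, then the slice where it is largest
    return int(np.argmax(stack.max(axis=(1, 2))))
```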

I have also tried using TrackMate, but with this plugin I don't know how to find settings that select my points automatically.

I don't have any more ideas! If someone could help me, suggest a plugin, or explain to me how to write a simple macro, it would be great!

I attach an example of my images and a macro that I found: macro.txt (1.3 KB)

Hi Sara, I think you forgot to upload some example images. It would be great if you could also describe how you are acquiring your z-stacks and your general imaging conditions.

In terms of analysis, it really sounds like your problem can be solved through TrackMate. What parameters are you hoping to extract from your trajectories?


Hi, thank you for your answer!
I'm acquiring my images on a DeltaVision microscope, with a 100X objective.
A stack of images spanning 7 planes at 0.6 µm increments was recorded at each time point (total thickness 4.2 µm). I acquire 1 stack every 8 s, for a total time of 180 s (23 frames). Conditions: 0.200 s exposure for GFP.
I'm interested in the average velocity of my spot's movement, and the distance travelled between time 1 and time 23.
Sorry, I had some problems attaching the image. I will try again (my image is 322 MB).

I cannot attach it. How can I send the image to you? Could you send me your email?

The best thing is to upload it to shared storage like Google Drive and provide a link. Then others can also see the images. Also, 322 MB is really large for an e-mail…

Hi, Merry Christmas!
I'm sending you two links so you can see the images:

There are two files. To make these images I did a time lapse with GFP, and at the same time I took a reference image in DIC. Also, for the GFP channel I acquired a z-stack at each time point to keep my point in focus.
Now I have an additional problem: some of my images are slightly displaced. This is easily observed in the time lapse of the DIC reference image. Since at each time point I acquire the whole z-stack in GFP plus one DIC image, I would like to know how to use the DIC images to remove this displacement from the GFP images. Is this possible? It is important that the alignment be based on the DIC image and not on the GFP spots, because the spots are moving, and that movement is exactly what I need to quantify! I have previously used the StackReg plugin to remove displacement, but it does not work for z-stacks. I've heard about the MultiStackReg plugin but I can't find where to download it.
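The idea of registering on DIC and then applying the same correction to GFP can be sketched with phase correlation: estimate an integer (dy, dx) translation between the reference DIC frame and each later DIC frame, then apply that same shift to every z-slice of the corresponding GFP stack. A minimal numpy sketch (integer, wrap-around shifts only; plugins like MultiStackReg do subpixel, non-circular registration):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `img`, using phase correlation (FFT-based cross-correlation)."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    f /= np.abs(f) + 1e-12               # keep only the phase
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def apply_shift(stack, dy, dx):
    """Apply one (dy, dx) shift to every z-slice of a (z, y, x) stack.
    np.roll wraps pixels around the edges; real registration tools pad
    or crop instead."""
    return np.roll(stack, (dy, dx), axis=(1, 2))
```

To align a later time point back onto the reference you would negate the estimated shift, e.g. `apply_shift(gfp_stack_t, -dy, -dx)`, where `gfp_stack_t` is a hypothetical name for the GFP z-stack at time t.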

Also, I have been testing new settings with the TrackMate plugin and I have obtained some improvements. I have used the following conditions:
DoG detector: estimated blob diameter 0.5, threshold 4, median filter activated, sub-pixel localization on.
HyperStack displayer.
Tracker: Simple LAP tracker (I don't know if it is the best option for my images).
Linking max distance 1.5 µm, gap-closing max distance 2.0 µm, gap-closing max frame gap 2 frames.

But I still have some problems:
-I do not understand the analysis of my results very well. I don't know how to identify each nucleus for analysis; the plugin gives a name to each spot, not to each nucleus.
-When I do the tracking and look at the analyzed spots, sometimes I see a larger circle and sometimes a smaller one. Is that OK? Does the program use the centroid in both cases for the analysis?
-Sometimes I observe a clear spot throughout the time lapse, but instead of one trajectory I obtain two separate trajectories. Is this a detection problem related to the diameter or intensity of the spot?
If I have to check every nucleus by hand, maybe it would take less time doing it manually :frowning:
Does anyone know how I can improve my detection settings?
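For what it's worth, the "one spot, two trajectories" symptom is usually a linking problem rather than a detection problem: if the spot jumps further than the linking max distance (or vanishes for more frames than the max frame gap allows), the tracker closes the track and starts a new one. A toy greedy nearest-neighbour linker (a much simplified stand-in for TrackMate's LAP tracker, one spot per frame assumed) makes the effect easy to see:

```python
import math

def link_tracks(frames, max_dist, max_frame_gap=0):
    """Greedy nearest-neighbour spot linking (toy illustration only).

    frames: list of lists of (x, y) spot positions, one list per frame.
    A track ends, and a new one starts, whenever no spot lies within
    max_dist for more than max_frame_gap consecutive missing frames.
    Returns a list of tracks, each a list of (frame_index, (x, y)).
    """
    tracks = []
    for t, spots in enumerate(frames):
        for s in spots:
            best = None
            for tr in tracks:
                last_t, last_s = tr[-1]
                if t - last_t > max_frame_gap + 1:
                    continue              # track has been lost too long
                d = math.dist(last_s, s)
                if d <= max_dist and (best is None or d < best[1]):
                    best = (tr, d)
            if best:
                best[0].append((t, s))    # extend the closest track
            else:
                tracks.append([(t, s)])   # start a new track
    return tracks
```

For a spot at (0,0) → (0.5,0) → (3,0), `max_dist=1` yields two tracks, while `max_dist=3` yields one, which is essentially what a too-small linking max distance does in TrackMate.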

Does anyone know where I can download the MultiStackReg plugin to align fluorescence z-stack images? Thank you!

Hi @biologa

I tried to align your sample image.
First, I aligned the DIC images with my plugin, CoordinateShift (see the GIF image).
It records the shift positions.
Then, the GFP z-stack images were aligned using that position data, also with CoordinateShift.
It can shift the images without a z-projection (each z-slice is shifted by the same amount).
Then I tried to track several points with another plugin of mine (ZahyoHyper, not uploaded yet).
It records the coordinates of the clicked position.
And with its peak-search function, the highest-intensity position (x, y, z) can be detected.
Is it something like this?


Sorry for the delay in replying:

As Hwada mentioned, you could use his CoordinateShift plugin to align your images. For your experiment, though, I don't think it makes too much sense to align to DIC.

As I understand it, you want to track the mobility of your tagged telomeres in each nucleus and be able to align the images to reduce motion artifacts. In this setup, you also need to take into account nuclear movement (rotations and translational movement), not only your whole sample moving. There are a couple of ways to do this:

i) Label your nuclei and get the center of mass: You can do that by, for instance, labeling the nuclear periphery (tag Nup49, for example), fitting an ellipse and then getting the center (see here). Alternatively, make the whole nucleus fluorescent so you can segment it and get the center from that. You may consider a live-cell nuclear dye such as SiR-Hoechst, or tagging an abundant nuclear protein. The main issue with this approach is that you don't correct for nuclear rotation, but you will be able to correct XY movements. Your shared GFP image would not be ideal for segmenting the nucleus.

ii) Normalize all movement to a fixed nuclear position: Tag another protein embedded in the nuclear periphery. In yeast you can use the spindle pole body so you could just tag a subunit like spc29 and acquire two-color images. This is probably the best approach for you imo.
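For option (i), the nuclear reference point can be as simple as an intensity-weighted center of mass of the pixels above a threshold; spot coordinates can then be expressed relative to that center. A minimal sketch (the threshold is an assumption standing in for a proper segmentation step):

```python
import numpy as np

def nucleus_center(img, threshold):
    """Intensity-weighted center of mass (x, y) of pixels above
    `threshold` -- a crude nuclear reference point from a nuclear
    label image."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs]                      # use intensities as weights
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())
```

A telomere spot's position relative to the nucleus is then just `(spot_x - cx, spot_y - cy)`, which removes whole-nucleus XY translation from the trajectory.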

From your GFP spot images you do: 200 ms exposure x 7 slices, every 8 seconds, for 23 timepoints on a DeltaVision. It sounds like the only reason you do the z-stack is to keep your foci in focus, and you say you can't do a max projection and track because you sometimes get double spots. You should be able to reduce your exposure to ~20 ms per slice (increase laser power if needed); then you could max-project everything, track in 2D, and your life will be far easier. You don't need a pretty image to do tracking, and you could potentially deconvolve if you need to.
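The max projection itself is one line once the data are in a (t, z, y, x) array (an assumption about how you load the hyperstack); after collapsing z, any 2D tracker applies:

```python
import numpy as np

def max_project(hyperstack):
    """Maximum-intensity projection along z.

    hyperstack: 4D numpy array (t, z, y, x) -> 3D array (t, y, x).
    With short per-slice exposures the whole z-stack is fast enough
    that each spot appears only once per timepoint.
    """
    return hyperstack.max(axis=1)
```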

You would have to label your nuclei first. I would suggest investigating KNIME to set up the analysis. It is quite easy to set up label dependencies (Nucleus1: spots 1, 2; Nucleus2: spots 3, 4, etc.) and it supports TrackMate. Here are links to some sample workflows:
Counting chromosomes
Using Trackmate in Knime

If you were testing different blob diameters and clicking preview, perhaps the preview images of different blob sizes stayed in the overlay.

This is probably just due to the linking settings in TrackMate. Instead of using the Simple LAP tracker you can use the LAP tracker, which has a bunch more features like track merging and also feature penalties for spot quality etc.

I would have also used the DoG detector and roughly the same settings you have used for the detection.

I hope this barrage of information is useful!


Hi hwanda,
Thank you very much for the information and your help! I did not know about your plugin.
Before your answer I had started using a plugin that Giovanni Cardone passed to me. It also works very well for image alignment, but I am sure that your plugin will be very useful for other experiments. Thank you again!

Hi Andrew,
Thank you very much for your information and ideas!!
Finally I was able to solve my image alignment problems with a script that Giovanni Cardone passed to me (I think aligning my images to DIC is correct; I don't quite understand why you say it doesn't make sense…).
Thank you for sending me the link to the CoordinateShift plugin, I had not found it yet!
I will need some time to investigate some of the plugins you mentioned :slight_smile: , but I think some of your ideas are good and may be very useful for future experiments! I will think about it.
Thank you very much for your time, your comments have been very useful.
I am sure that in a short time I will have new doubts again :slight_smile: . Thank you again!
