How to track spots in a time lapse and Z-stack

Hi everyone! I need your help!

My name is Sara. I'm working on telomere (GFP-labelled) movement analysis in yeast cells. To do this I have to follow a point through a time lapse and find the Z-slice where the point is best focused (my point moves in X, Y and Z). I use the ManualTrack or MTrackJ plugins of ImageJ, but with these plugins I have to manually select my point of interest throughout the whole time lapse and look for the best-focused Z-slice, for each of my cells. I would like to find one or more plugins to analyze my images faster. For this I need a plugin that automatically detects the labelled telomere in each cell in the best-focused Z-slice, draws the track it follows along the time lapse, and measures the average speed at which my point moves and the distance it travels (these measurements are obtained automatically with the MTrackJ plugin). I would like to know if someone knows how to do this and which plugins could be used, or maybe explain to me how to write a simple macro.
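The speed and distance numbers MTrackJ reports are simple arithmetic once you have per-frame coordinates. A minimal Python sketch (hypothetical coordinates, assuming a fixed frame interval and positions already in microns):

```python
import math

def track_stats(points, dt):
    """points: list of (x, y) positions in microns, one per frame;
    dt: time between frames in seconds.
    Returns (total path length, net displacement, mean speed)."""
    # Total path length: sum of step-to-step distances
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    # Net displacement: straight-line distance from first to last frame
    net = math.dist(points[0], points[-1])
    # Mean speed: path length over total elapsed time
    mean_speed = path / (dt * (len(points) - 1))  # microns per second
    return path, net, mean_speed

# Toy example: a spot moving 0.5 um per 8 s frame along x
pts = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
path, net, speed = track_stats(pts, dt=8.0)
```

The same distinction applies to your time 1 to time 23 question: path length and net displacement are different quantities, and MTrackJ reports both.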

Initially, to try to simplify the process, I thought about doing a maximum projection, but because my point moves faster than the time the microscope takes to acquire the full z-stack, sometimes two points appear instead of one. So I need a plugin that can detect the point within the z-stack.
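Picking the best-focused slice instead of projecting can be done with a per-slice focus score. A minimal sketch using intensity variance as the score (variance is one common focus metric among several; this is an illustration, not any particular plugin's method):

```python
def variance(slice2d):
    """Intensity variance of a 2-D slice (list of rows) as a simple focus score:
    in-focus slices tend to have higher contrast, hence higher variance."""
    vals = [v for row in slice2d for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def best_focused_slice(zstack):
    """zstack: list of 2-D slices. Returns the index of the sharpest slice."""
    scores = [variance(s) for s in zstack]
    return scores.index(max(scores))

# Tiny toy stack: the middle slice has the highest contrast
stack = [
    [[1, 1], [1, 1]],   # flat -> variance 0
    [[0, 9], [9, 0]],   # high contrast -> selected
    [[2, 3], [3, 2]],   # mild contrast
]
```

Running this per time point gives one slice per frame, which avoids the duplicated-spot artifact a maximum projection produces when the spot moves during stack acquisition.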

Also, I have tried the "Find Maxima" command (single points output) for point detection. It detects points correctly, but only in one frame. I found a macro to run Find Maxima on a stack, and it seems to look for points throughout the Z-stack and time lapse, but afterwards they are not marked in the image.

I have also tried TrackMate, but with this plugin I don't know how to find settings that automatically select my points.

I don't have any more ideas! If someone could help me, suggest a plugin, or explain how to write a simple macro… it would be great!

I attach an example of my images and a macro that I found: macro.txt (1.3 KB)

Hi Sara, I think you forgot to upload some example images. It would be great if you also described how you are acquiring your z-stacks and your general imaging conditions.

In terms of analysis, it really sounds like your problem can be solved through TrackMate. What parameters are you hoping to extract from your trajectories?


Hi, thank you for your answer!
I'm acquiring my images on a DeltaVision microscope with a 100X objective.
A stack of images spanning 7 planes at 0.6 µm increments was recorded at each time point (total thickness 4.2 µm). I acquire one stack every 8 s, total time 180 s (23 frames). Exposure: 0.200 s for GFP.
I'm interested in the average velocity of my spot's movement, and the distance between time 1 and time 23.
Sorry, I had some problems attaching it. I will try again (my image is 322 MB).

I cannot attach it. How can I send the image to you? Could you send me your email?

The best thing is to upload it to shared storage like Google Drive and provide a link; then others can also see the images. Also, 322 MB is a really large e-mail…

Hi, Merry Christmas!
I'm sending you two links so you can see the images:

There are two files. To make these images I did a time lapse in GFP, and at the same time I took a reference image in DIC. For the GFP channel I acquired a z-stack at each time point to keep my point in focus.
Now I have an additional problem: some of my images are slightly displaced. This is easily seen in the time lapse of the DIC reference image. Since I acquire at time 1 (the full GFP z-stack plus one DIC image), at time 2 (the full GFP z-stack plus one DIC image), and so on, I would like to use the DIC images to remove this displacement from the GFP images. Is this possible? It is important that the alignment be based on the DIC image and not on the GFP spots, because the spots are moving, and that movement is exactly what I need to quantify! I have used the StackReg plugin before to remove displacement, but it does not work on z-stacks. I've heard about the MultiStackReg plugin but I can't find where to download it.

Also, I have been testing new settings with the TrackMate plugin and have obtained some improvements. I used the following conditions:
DoG detector: estimated blob diameter 0.5, threshold 4, median filter activated, sub-pixel localization enabled.
HyperStack displayer.
Tracker: Simple LAP tracker (I don't know if it is the best option for my images).
Linking max distance 1.5 micron, gap-closing max distance 2.0 micron, gap-closing max frame gap 2 frames.
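For intuition about those last two settings: after linking, the tracker tries to join track segments across frames where the detector missed the spot. A segment end and a later segment start are candidates for joining when they are close enough in space (gap-closing max distance, in microns) and in time (max frame gap, in frames, not microns). A simplified sketch of that decision, not TrackMate's actual implementation:

```python
import math

def can_close_gap(end_pt, end_t, start_pt, start_t,
                  max_dist=2.0, max_frame_gap=2):
    """Decide whether two track segments should be joined, in the spirit
    of the LAP tracker's gap-closing step (simplified illustration).
    end_pt/start_pt: (x, y) in microns; end_t/start_t: frame indices."""
    frame_gap = start_t - end_t
    return (0 < frame_gap <= max_frame_gap
            and math.dist(end_pt, start_pt) <= max_dist)

# Segment A ends at frame 5; segment B starts at frame 7, 1.0 um away
joined = can_close_gap((3.0, 3.0), 5, (3.0, 4.0), 7)
```

This is also why a clearly visible spot can come out as two separate trajectories: if the detection gap is longer than the max frame gap, or the spot moved farther than the gap-closing max distance during the gap, the segments are never joined.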

But I still have some problems:
-I do not understand the analysis of my results very well. I don't know how to identify each nucleus for analysis; the plugin assigns a name to each spot, not to each nucleus.
-When I run the tracking and look at the analyzed spots, sometimes I see a larger circle and sometimes a smaller one. Is that OK? Does the program use the centroid in both cases for the analysis?
-Sometimes I see a clear spot throughout the time lapse, but instead of one trajectory I get two separate trajectories. Is this a detection problem related to the spot's diameter or intensity?
If I have to test each nucleus separately, it might take less time to do it manually :frowning:
Does anyone know how I can improve my detection settings?

Does anyone know where I can download the MultiStackReg plugin to align fluorescence z-stack images? Thank you!

Hi @biologa

I tried to align your sample image.
First, I aligned the DIC images with my plugin, CoordinateShift (see the GIF image).
It records the shift positions.
Then the GFP z-stacks were aligned using that position data, also with CoordinateShift.
It can shift the images without a z-projection (each z-slice is shifted by the same amount).
Then I tracked several points with another plugin of mine (ZahyoHyper, not uploaded yet).
It records the coordinates of the clicked position, and with its peak-search function the highest-intensity position (x, y, z) can be detected.
Is this what you mean?
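The peak-search step described above amounts to an argmax over the 3-D stack (in practice restricted to a small neighbourhood around the clicked position). A minimal sketch of the global version, assuming the stack is a list of 2-D slices:

```python
def brightest_voxel(zstack):
    """Return (x, y, z) of the highest-intensity voxel in a z-stack
    given as a list of 2-D slices (each a list of pixel rows)."""
    best = (0, 0, 0)
    best_val = zstack[0][0][0]
    for z, slice2d in enumerate(zstack):
        for y, row in enumerate(slice2d):
            for x, v in enumerate(row):
                if v > best_val:
                    best_val = v
                    best = (x, y, z)
    return best

# Toy stack of two 2x2 slices; the brightest voxel sits at x=0, y=0, z=1
stack = [
    [[1, 2], [3, 4]],
    [[9, 2], [1, 0]],
]
```

Restricting the search to a window around the previous frame's position is what turns this into a simple click-assisted tracker.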