Automatically detecting and measuring transcription sites in image stacks

Hi there,

I’m a pure biologist and the idea of coding and ImageJ intimidates me a bit…

I’ve performed fluorescence in situ hybridization on a cancer cell line and taken images of these cells in 3D (stacks of 76 Z planes per field). I need to analyze, for 400 cells × 3, how many transcription sites each cell has, how big they are (area), and what their intensity is. A cell can have from 0 to 3 sites, and each site’s optimal resolution may be on a different Z plane. I can’t simply draw a circle around a site and hit “measure”, because the sites are not perfect circles.

Is there a plugin I could run where I tell it what a site looks like, or give it a minimum brightness threshold for what could be a site, and the program finds all the sites for me and measures them per cell? It feels silly to do this analysis by hand in the year 2016 when so much great software is out there…

Thank you so much for your help!

here’s a link to the stack: https://drive.google.com/file/d/0B46t8Vo1yrqzLUE4VDREcGFWNFk/view?usp=sharing

Hello @git.rei

Welcome to the Forum!

This sounds like a segmentation task, maybe with some model learning.

If you could post one example stack it would be easier to get you started.

Please post representative example images with comments, and respect the fact that doing things by hand need not be silly, even in 2016.

Best

Herbie


Here are two representative stacks, one with smaller sites and one with larger… to me, it is clear what are sites and what are artifacts (sites are brighter); is there a program that can help me measure their number and area/intensity (in pixels and hue)?

I see these uploaded as images and not stacks… I don’t know how to upload stacks :confused:

If it is not possible to upload the stacks directly, you could use Dropbox/Google Drive or another online storage service.

Please tell us something about what is what.

“to me, it is clear what are sites and what are artifacts”

To me it is not. The second image shows various faint dots…

“(in pixels and hue)”

You’ve posted gray-level pictures; consequently, there is no hue information available.

Best

Herbie

Sorry if this is obvious to you… but how can I share a Google Drive file with you through this forum?

Hi there, thanks for the response! I may not have used the correct term… by ‘hue’ I perhaps mean brightness? I want to know the area in pixels and the intensity as how white each pixel is (a number where 0 is black, and the brighter a pixel is, the higher the value).

I uploaded an example of what are sites and what are not in the wild-type cells, which have small and possibly vague sites… What do you think? Can I do something here?

Thanks again for your help!

To me things appear more complicated.

Regarding the indicated spots, you may separate them from the others by thresholding. At the very least, thresholding gives you an idea of the interesting positions. Then you can determine the size of the interesting spots.

However, what about the remaining cells in this image? They show spots too, though fainter than those indicated in the two cells. Is it possible that these faint spots are of interest, or are they definitely and always artifacts?

For example what about the spots in the green frame?

In short, thresholding alone is a tricky approach…
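
Not an ImageJ recipe, but to make the thresholding idea concrete, here is a minimal Python/scikit-image sketch; the file name, the Otsu threshold and the size cutoff are assumptions that would have to be tuned to your data.

```python
# Minimal sketch of the thresholding idea (Python + scikit-image).
# File name, threshold choice and size cutoff are assumptions, not a recipe.
from skimage import io, filters, measure

img = io.imread("example_slice.tif")        # one 2D slice (hypothetical file name)

thresh = filters.threshold_otsu(img)        # one possible global threshold
mask = img > thresh                         # keep only the bright pixels

labels = measure.label(mask)                # connected bright regions = candidate sites
for region in measure.regionprops(labels, intensity_image=img):
    if region.area < 5:                     # arbitrary cutoff to drop single-pixel noise
        continue
    print(region.label, region.area, region.mean_intensity)
```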

Best

Herbie

Thank you for the example stack.

If we take a look at the stack, we can observe that only some z-slices in the middle are truly in focus. The other slices are blurry. The blur is a result of diffraction and out-of-focus light.

One way to get sharp images would be deconvolution. But to perform a deconvolution you need the point spread function (PSF) of your microscope, which is normally unknown.

Because you want to segment bright, roughly spherical spots, we can use a Difference of Gaussians (DoG) approach. With a non-symmetric (anisotropic) kernel the PSF can be approximated; I took a kernel that is three times bigger in the Z direction than in X/Y.

After the DoG, a maximum projection is performed to collect the bright spots from all Z slices. In the example stack this works well because there are no overlapping bright spots. If other stacks have overlapping bright spots, the projection should be omitted. But as long as the in-focus part is thinner than the bright spots, this should always work.

In the next step another DoG is performed. The second DoG just enhances the round spots.
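
The workflow itself is KNIME-specific, but the same steps can be sketched in Python with scikit-image; the sigma values and the file name below are assumptions, and only the structure mirrors the workflow:

```python
# Rough transcription of the DoG -> max projection -> DoG pipeline described
# above (Python + scikit-image). The sigmas and the file name are assumptions;
# only the structure mirrors the KNIME workflow.
import numpy as np
from skimage import io, filters

stack = io.imread("example_stack.tif").astype(np.float32)  # shape (z, y, x), hypothetical name

# 3D Difference of Gaussians with an anisotropic kernel:
# the sigma is three times larger along Z than along X/Y.
sigma_small = (3.0, 1.0, 1.0)   # (z, y, x)
sigma_large = (6.0, 2.0, 2.0)
dog3d = (filters.gaussian(stack, sigma=sigma_small)
         - filters.gaussian(stack, sigma=sigma_large))

# Maximum projection collects the bright spots from all Z slices
# (only safe when spots from different slices do not overlap in X/Y).
proj = dog3d.max(axis=0)

# A second, 2D DoG further enhances the round spots.
dog2d = filters.gaussian(proj, sigma=1.0) - filters.gaussian(proj, sigma=2.0)
```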

This is the result I get:

I did the analysis with KNIME Image Processing and here is the workflow (41.1 KB).

Because of the blur due to the PSF, the area probably isn’t the best measurement. The same goes for the intensity. Because of the PSF you can’t really say where the exact location of a spot is, unless you use some deconvolution. Maybe the count of bright spots could help? But I am not a biologist :slight_smile:


Getting deconvolution and a diffraction-based PSF into imagej-ops (and thus KNIME) would really help here.

Normally we don’t know the PSF of the microscope. However, we can get a pretty good estimate using a theoretical PSF with spherical aberration (usually the dominant aberration).

Alternatively, we could try to extract the PSF from beads (either a bead image, or beads embedded with the sample). With KNIME we could set up a workflow where one branch segments the beads and approximates the PSF, and another calculates the theoretical PSF.

Then we feed both PSFs into a deconvolution/segmentation/measurement/classification workflow. Assuming we are processing a large amount of data, the next morning we would find out whether using a theoretical vs. a measured PSF makes a difference in the measurements and subsequent classification (or maybe we find out by lunch if we use a GPU).
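
As a hedged illustration of the plumbing only (not the imagej-ops implementation), here is what feeding a PSF into a Richardson-Lucy deconvolution could look like in Python with scikit-image; the Gaussian blob below is a crude stand-in for a proper theoretical PSF with spherical aberration, and the file name is a placeholder:

```python
# Sketch of feeding a PSF into a deconvolution step (Python + scikit-image).
# A real theoretical PSF would model diffraction and spherical aberration;
# the Gaussian blob below is only a crude stand-in to show the plumbing.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import io, restoration

stack = io.imread("example_stack.tif").astype(np.float64)  # hypothetical file name
stack /= stack.max()   # normalise so the default output clipping does not truncate the result

# Crude stand-in "theoretical" PSF: a normalised anisotropic Gaussian blob,
# broader along Z than in X/Y, as with real microscope PSFs.
psf = np.zeros((15, 15, 15))
psf[7, 7, 7] = 1.0
psf = gaussian_filter(psf, sigma=(3.0, 1.0, 1.0))
psf /= psf.sum()

# Richardson-Lucy deconvolution; the iteration count is an arbitrary choice.
deconvolved = restoration.richardson_lucy(stack, psf, 30)
```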


Thanks so much for this workflow and your help. I understand that due to some unknowns (the PSF) we can’t get exactly accurate measurements; however, this is the reason we have a control and an experiment. I want to know the relative difference in intensity and area between all the sites I find in my wild-type cells and the sites in treated cells (see the two example pictures attached, which I uploaded earlier). How can I make these measurements in an automatic way?

Thanks again!!

G


The workflow I proposed returns the segmentation of your sites (unless I segmented the wrong stuff?). On the segments you can compute all kinds of statistics (size, mean intensity) with the appended “Image Segment Features” and “Segment Feature” nodes.
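
Roughly, those nodes boil down to per-segment statistics; outside KNIME, the same numbers could be obtained with scikit-image, sketched below with placeholder file names standing in for images exported from the workflow:

```python
# Per-segment size and mean intensity as a table (Python + scikit-image),
# roughly what the feature nodes report. The file names are placeholders for
# images exported from the workflow (the DoG-enhanced projection and the raw
# maximum projection).
import pandas as pd
from skimage import io, filters, measure

enhanced = io.imread("dog_projection.tif")          # hypothetical exported DoG projection
raw = io.imread("raw_projection.tif")               # matching raw maximum projection

mask = enhanced > filters.threshold_otsu(enhanced)  # segment the enhanced spots
labels = measure.label(mask)

props = measure.regionprops_table(
    labels,
    intensity_image=raw,                            # measure intensity on the raw data
    properties=("label", "area", "mean_intensity"),
)
print(pd.DataFrame(props).to_string(index=False))
```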

Maybe I misunderstood you.

Hi @git.rei

Do you use KNIME? The workflow @tibuch shared is made for KNIME, not ImageJ.

If you don’t know KNIME, you should download it and try @tibuch 's workflow (feel free to ask lots of questions if you don’t know KNIME; I bet others will be interested in learning how to get started).

If you already know KNIME but had some issues running the workflow, let us know more details. I ran the workflow without a problem. The next step I would take is to add a control image to the file reader; then the entire workflow will run on both images.

Once you have the basic workflow running, the next step is to adjust the workflow to assign a separate label to experiment and control, in order to compare groups. We can ask the KNIME experts for advice on this part.
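
Purely as an illustration of that last step (outside KNIME): if each condition’s per-site table were exported, e.g. as CSV, the group comparison would be a simple aggregation. The file and column names here are assumptions, not part of the shared workflow.

```python
# Illustration only: once each condition's per-site table has been exported
# (e.g. as CSV with "area" and "mean_intensity" columns), comparing the groups
# is a simple aggregation. File and column names are assumptions, not part of
# the shared KNIME workflow.
import pandas as pd

wt = pd.read_csv("sites_wildtype.csv")       # hypothetical export, one row per detected site
wt["condition"] = "wildtype"
tr = pd.read_csv("sites_treated.csv")
tr["condition"] = "treated"

sites = pd.concat([wt, tr], ignore_index=True)
print(sites.groupby("condition")[["area", "mean_intensity"]].mean())
print(sites.groupby("condition").size())     # number of detected sites per condition
```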
