Fluorescent Muscle Sections

Hello,

I am looking for some help setting up a pipeline to analyze fluorescent images of transverse muscle sections. I am new to CellProfiler, and I believe my images present some unique challenges, so I’m looking to “poll the audience” for the best way to go about this. I’ve attached a JPEG that represents the kind of image I am trying to quantify (the TIF file was too large, so this picture is a low-resolution example). So here are my questions:

  1. The picture you see is actually compiled from multiple photos taken at 10x. I used Photoshop to stack the pictures one on top of the other (aligned the layers, flattened the image, and filled in the background). I’ve read elsewhere that using Photoshop is a bad idea when you want to process the image in CellProfiler, but I’m hoping that since I didn’t manipulate any of the colors (no changes to contrast, saturation, hue, etc.), simply stacking the images will not affect the program’s functionality. I can always analyze individual pictures, however, so this problem could be avoided.

  2. There are a couple of things I’ve stained for in this image. The first is laminin, which stains the outline of each individual muscle fiber. The blue stain is DAPI, which identifies all nuclei. The red is Pax7, a stain used to identify satellite cells (the stem cells of the muscle). Using these stains, I’m hoping to quantify the things enumerated below:

A) The number of satellite cells in the image (colocalization of DAPI in blue and Pax7 in red)
B) The total number of fibers in a transverse section
C) The total cross-sectional area of all of the muscle fibers
D) The average cross-sectional area of an individual fiber
E) The number of injured fibers (identified as centrally nucleated fibers, i.e., fibers with a DAPI-stained nucleus located near the center of the fiber)

An additional consideration is that some areas of the transverse sections are not entirely perfect. I think I read somewhere that there is a way to manually exclude certain areas of the picture from analysis. This would probably be a good idea in my case.

So, any suggestions to begin with? Also, which of the example pipelines do you think would be best to model mine after?

I’ll be reading through the manual and eagerly awaiting any suggestions. Thanks!

Sincerely,
Corey


It might actually be more useful to tailor the pipeline to individual images rather than a section composited in Photoshop. So you can ignore point 1 above, and instead use an image like the one attached below to understand what I’m trying to look at. I’m not sure whether this would change the pipeline a whole lot… if the pipeline could handle both kinds of images, that would be great, but I have my doubts. Thanks. Hopefully I’ll have a cursory pipeline to share sometime soon.


To accomplish this, I would create a pipeline something like the following:
[ul]
[li]LoadImages to input each fluorescent channel (you could use the combined image, but then you would have to split them apart anyway).[/li]
[li]IdentifyPrimAutomatic on the DAPI channel to identify all the nuclei.[/li]
[li]IdentifyPrimAutomatic on the Pax7 channel to identify the Pax7-positive regions[/li]
[li]Use Relate to establish a parent-child relationship between the nuclei (the parents) and the Pax7-positive regions (the children).[/li]
[li]ClassifyObjects to classify the nuclei as to whether they have 0 children or greater than 0. In other words, the nuclei with > 0 children are the ones which overlap with a Pax7-positive region and are therefore satellite cells. This will take care of task A.[/li]
[li]IdentifyPrimAutomatic to identify the fibers. You could do this in one of two ways:
[list]
[li]Identify the bright fiber outlines with IdentifyPrimAutomatic, probably using RobustBackground as the thresholding method. Use ConvertToImage to convert the objects to a binary image, ImageMath to invert it, then IdentifyPrimAutomatic on this image with a threshold method of “Other” set to 0.5 in order to get the fibers.[/li]
[li]Invert the image using ImageMath, and then identify the fibers themselves, probably using MoG as the method with the fraction covered set to a high percentage, like 0.99.[/li]
[/list]
I’d probably go with the first option. In either case, IdentifyPrimAutomatic will give you a fiber count as well, which takes care of task B.[/li]
[li]Once you’ve identified the fibers, use MeasureObjectAreaShape to measure the size of each one.[/li]
[li]Use MeasureImageAreaOccupied to measure the total area occupied by the fibers. This is task C.[/li]
[li]Relate the fibers (parents) to the nuclei (children).[/li]
[li]ClassifyObjects to classify the fibers as to whether they have 0 children or more than 0. In other words, the fibers with > 0 children are the ones which overlap with nuclei and are therefore injured. You may want to use ExpandOrShrink on the fiber objects to make sure they don’t overlap with the nuclei sitting between the fibers. This is task E.[/li]
[li]ExportToExcel to get all the measurements. This will also give you a per-image average for each measurement, which takes care of D.[/li]
[/ul]
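If it helps to see the parent-child logic for task A outside of CellProfiler, here is a rough Python sketch using scipy. This is purely illustrative: the function name and the fixed 0.5 threshold are mine, not CellProfiler's (CellProfiler's thresholding methods are more sophisticated than a fixed cutoff).

```python
# Illustrative sketch of the Relate/ClassifyObjects logic for task A:
# a nucleus counts as a satellite cell if it overlaps any Pax7-positive
# pixel. Assumes dapi and pax7 are 2-D grayscale arrays on a 0-1 scale.
import numpy as np
from scipy import ndimage

def count_satellite_cells(dapi, pax7, thresh=0.5):
    """Count DAPI nuclei that overlap a Pax7-positive region."""
    nuclei, n_nuclei = ndimage.label(dapi > thresh)   # "parent" objects
    pax7_mask = pax7 > thresh                         # "child" regions
    satellite = 0
    for i in range(1, n_nuclei + 1):
        if pax7_mask[nuclei == i].any():  # nucleus has > 0 Pax7 children
            satellite += 1
    return satellite
```

In CellProfiler itself, Relate and ClassifyObjects handle this bookkeeping for you; the sketch is just to make the counting rule explicit.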

As for manually excluding regions, the IdentifyPrimManual module can be used for this purpose. The object that you manually outline can then be used to mask out portions of the original image.

I would say to take a look at the ExampleNeighbors pipeline. The tissue structure is similar to yours.

Regards,
-Mark

Dear Mark,

Thanks for your speedy reply. I tried to follow your instructions as closely as possible to create the pipeline below. On the first IdentifyPrimAutomatic module (module 2), it throws an error saying the image is not two-dimensional, which may be because the image is color rather than grayscale.

This gave me a couple of questions:

1) Do I need to insert a ColorToGray module before this first IdentifyPrimAutomatic module, regardless of whether the files are individual channels or combined images?

2) You mentioned that combined image files can be split into RGB channels using CellProfiler. This would be faster for my use, since it is quicker to export the single combined file directly than to split each file with our microscope software, export those as TIFFs, and use them as the input files for CellProfiler.

However, it seems that having CellProfiler split the combined image files into RGB before analysis will be much more processor- and RAM-intensive than splitting the files with other software first. This may become an issue, given that I don’t want to have to purchase a top-of-the-line computer to process these images, so I want to write the modules in a way that minimizes the RAM required. Which method would you suggest?

3) If the images are already split into R, G, and B (three separate files), how should they be processed in the ColorToGray module, since that module seems to require a combined image file rather than individual channel files?

I’m operating under the assumption that I need to use a ColorToGray module before my first IdentifyPrimAutomatic module. If this is not the case, is there a possibility that there is something wrong with the images themselves?

Thanks,
Corey
Pipeline1.mat (1.71 KB)

You only need a ColorToGray module if you need to work with the individual grayscale (intensity-only) fluorescence channels. But see the caveat to your last question below.

In the combined case, you will have 4 images in memory (1 RGB + R + G + B), whereas in the split case, you have only the three R + G + B images. Either way, if you are dealing with large images, you are going to run into memory limits. There are a couple of approaches:

  • Insert SpeedUpCellProfiler modules in your pipeline to clear out images as soon as you are done with them. Remember that once you clear an image, you cannot refer to it again later in the pipeline. So if you are splitting all three channels, get rid of the original color image immediately after the ColorToGray.

  • Use the ColorToGray module in a piecemeal fashion. Rather than split all the channels at once, split only one channel at a time (filling in ‘N’ for the others). Then, after you use that channel, use a SpeedUpCellProfiler module to get rid of it, then go on to the next channel. You can still get rid of the original color image after the last ColorToGray.

As for your third question: if the images are already split, there is no need for a ColorToGray module unless the individual channels are saved as color images even though they look grayscale (i.e., the R, G, and B values are identical). In that case, you do need a ColorToGray module to flatten each image.
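To put rough numbers on the memory tradeoff, here is a back-of-the-envelope sketch. It assumes a 2048 × 2048 image held as 64-bit doubles (CellProfiler converts loaded images to double precision internally); your actual image dimensions will differ.

```python
# Back-of-the-envelope memory estimate for the two loading strategies.
# Assumes a 2048 x 2048 image stored as 64-bit doubles; adjust h and w
# to your real image size.
h, w = 2048, 2048
bytes_per_px = 8                      # double precision

gray = h * w * bytes_per_px           # one grayscale channel
rgb = 3 * gray                        # one combined color image

combined_peak = rgb + 3 * gray        # RGB + R + G + B in memory at once
split_peak = 3 * gray                 # only R + G + B

print(combined_peak // 2**20, "MiB vs", split_peak // 2**20, "MiB")
# prints: 192 MiB vs 96 MiB
```

So loading the combined image roughly doubles the peak footprint until you drop the original color image with SpeedUpCellProfiler.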

Hope this helps!
-Mark

Dear Mark,

Thanks for your help. I will take these factors into consideration as I try to optimize the modules. But I’ve realized I should get the pipeline working error-free before I optimize it, and that being said, I’ve hit a roadblock with module 6 (ClassifyObjects). I’ve uploaded my pipeline and an example file with this post. Currently I can get the image to run through to module 6, at which point it gives me the error “Error in ClassifyObjects, no measurement with name Children_other_count. Likely the category of measurement you chose, children was not available for Nuclei with feature number other. Possibly specific to image “OrigBlue” and/or texture scale=1.”

I’ve looked through the manual entries for both the Relate module and the ClassifyObjects module and couldn’t really figure out what the problem was. My only guess is that, according to the manual, there should be a measurement module before the ClassifyObjects module. If that’s the case, which measurement module should I use? Or is no measurement module needed?

I’m also not quite sure how to set up the ClassifyObjects module for our purpose, since it seems to want me to specify settings for measuring intensity, area occupied, etc. with specific images, and I’m not sure what to input here. It also keeps throwing an error in the command-line window in the background saying that the popmenu requires a scalar value. Let me know what your thoughts are.

Thanks,
Corey


Pipeline2forsnapshots.mat (1.78 KB)

Hi Corey,

First off, I noticed that the Relate modules take an abysmally long time to run. Second, the error in ClassifyObjects comes from a bug in which 0 can’t be used as a cutoff for images with no values less than 0. So I made a couple of changes to the pipeline (attached):

  • Since there were a lot of parent objects to relate against, the Relate runtime was very long. This can be solved by not segmenting the detected objects in the first IdentifyPrimAutomatic module.

  • I replaced each Relate module with a workaround that measures the intensity of the fibers against an image version of the objects, and then uses ClassifyObjects with an IntegratedIntensity cutoff of 0.0001. The Pax7-positive objects in the grayscale image have values greater than 0, but since I couldn’t use 0 as the cutoff, I used a very small value instead.
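For the curious, the workaround’s logic boils down to the following sketch (plain Python with scipy, not the actual module code; the function and variable names are illustrative):

```python
# Sketch of the Relate workaround: sum each fiber's intensity over a
# binary image of the Pax7 objects; any fiber whose integrated intensity
# exceeds a tiny cutoff (standing in for "> 0") contains an object.
import numpy as np
from scipy import ndimage

def fibers_containing_objects(fiber_labels, object_mask, cutoff=1e-4):
    """Return the label IDs of fibers overlapping the object mask."""
    n = int(fiber_labels.max())
    sums = ndimage.sum(object_mask.astype(float), fiber_labels,
                       index=range(1, n + 1))
    return [i + 1 for i, s in enumerate(sums) if s > cutoff]
```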

Hope this helps!
-Mark
2010_03_11_Pipeline2forsnapshots.mat (1.89 KB)

Dear Mark,

I’ve switched to using images at a higher magnification (20x instead of 10x). Currently I’m working on improving the pipeline for these images.

The issue I’d like to focus on is the identification of the individual muscle fibers. I manually counted the attached image, and it has 66 individual fibers. I’ve cut the pipeline down to just the identification steps for counting the number of fibers in the section, so we can get this working correctly before I try to identify the nuclei.

Currently, when I run the identify module for the fibers, it comes up with 81 fibers. The error comes from the program erroneously identifying the interstitial space as muscle fibers.

I realized there is a FilterByObjectMeasurement module, so I’ve filtered on the solidity of the identified objects, and it seems to work well on this image. However, no table pops up to tell me how many objects have been excluded and how many remain after filtering. Is there a way to get this information in a table? Also, can you give me a layman’s explanation of what I’m actually measuring when I talk about the solidity of an object? From what I understand, it is a measurement of how convex the object is?

Thanks,
Corey
Pipeline for 20x Images-fiber counting.mat (1.14 KB)

Hi Corey,

Re: Counting filtered objects - Unfortunately, there is a bug in which FilterByObjMeasurement doesn’t produce an image count. If you want that number as part of the output table, you will need to do the following:

  • ConvertToImage with the binary setting to change the objects to a binary image.

  • IdentifyPrimAutomatic to re-identify the objects and produce a count. Set all the “Discard…” options to No, the thresholding method to Other with a value of 0.5, and “Do not use” for “Method to distinguish clumped objects…”

The output here will include a per-image count of the remaining objects.
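The ConvertToImage + IdentifyPrimAutomatic trick amounts to the following (a minimal Python sketch with scipy, just to show why the recount works; not CellProfiler code):

```python
# Flatten the filtered objects to a binary image, then re-label it:
# the number of connected components is the per-image object count.
import numpy as np
from scipy import ndimage

def count_objects(filtered_labels):
    binary = filtered_labels > 0          # ConvertToImage, binary setting
    _, n = ndimage.label(binary)          # re-identify and count
    return n
```

One caveat: if two filtered objects touch, re-labeling merges them into a single component; with fibers separated by laminin outlines, that should be rare.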

Re: solidity - Solidity is defined as the area of the region divided by the area of the convex hull of the region. The convex hull is the smallest convex polygon that encloses an object, so objects that are not convex (such as a crescent shape) will have a low solidity. It makes sense that solidity works well here, since these interstitial spaces tend to be irregular, non-convex shapes, while the fibers are roughly convex. FormFactor may also work, since that measure is close to 1 for a circular object.
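Here is a small numeric illustration of that definition. It uses scipy’s ConvexHull over pixel coordinates; because the hull is taken over pixel centers, the values are only approximate (a perfectly convex square can come out slightly above 1), but the ordering is what matters.

```python
# Approximate solidity = region area / convex-hull area.
import numpy as np
from scipy.spatial import ConvexHull

def approx_solidity(mask):
    pts = np.argwhere(mask)                       # pixel coordinates
    return mask.sum() / ConvexHull(pts).volume    # .volume = 2-D hull area

square = np.zeros((20, 20), dtype=bool)
square[2:18, 2:18] = True          # convex: solidity near 1

cshape = square.copy()
cshape[5:15, 5:20] = False         # carve a bite out -> non-convex "C"

print(approx_solidity(square), approx_solidity(cshape))
```

The C-shape fills only part of its convex hull, so its solidity drops well below the square’s, which is exactly how the filter separates irregular interstitial spaces from the roughly convex fibers.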

Regards,
-Mark