Ilastik - Identifying Double Labeled Cells

Hi everyone,
I’m using Ilastik to look at double-labeled cells (DAPI + Alexa 594). I’d like to train the program to recognize only the cells that have both DAPI and Alexa 594 labeling. I figure that first I should get Ilastik to recognize the DAPI-labeled nuclei and then use these as seeds to identify the Alexa-labeled cells, but does anyone know how to do this - or whether that is indeed the best way to do it? Thanks in advance!

Attachments: Snap-953_c1.tif (4.9 MB), Snap-953_c1 2.tif (9.5 MB), Snap-953_c2.tif (4.2 MB)

Your images are way oversaturated; turn down the exposure so that only a few pixels max out. Also, work with your images as 8- or 16-bit grayscale, not RGB.
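If you want to check the saturation quantitatively, here’s a minimal sketch in Python (assuming tifffile and numpy are installed; the filename is just one of the exported channels):

```python
# Quick saturation check: what fraction of pixels sit at the dtype maximum?
import numpy as np
import tifffile

img = tifffile.imread("Snap-953_c1.tif")   # one grayscale channel
max_val = np.iinfo(img.dtype).max          # 255 for 8-bit, 65535 for 16-bit
frac_saturated = (img == max_val).mean()
print(f"{frac_saturated:.2%} of pixels are saturated")
```

A few maxed-out pixels are fine; whole blown-out nuclei are not.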

“The best way to do this?” That’s a loaded question. Here’s one way:

First use ImageJ to convert your image to 8-bit:
[screenshot: ImageJ 8-bit conversion]

then threshold or auto threshold:
[screenshot: threshold dialog]

mean is good enough for this; other methods might work better:
[screenshot: auto-threshold method list]

do a bit of morphological processing, first open:
[screenshot: binary open]

then watershed to split clumps:
[screenshot: watershed]

save it as a tif:
[screenshot: save as TIFF]

also convert the Alexa image to an 8-bit tiff (not shown).
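If you’d rather script those preprocessing steps than click through the menus, here’s a minimal sketch of the same pipeline in Python with scikit-image. The filenames come from the attachments above, and the opening radius and watershed seed depth are assumptions you would tune for your data:

```python
# Minimal sketch: 8-bit conversion, mean threshold, opening, watershed, save.
import numpy as np
import tifffile
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation, util

dapi = util.img_as_ubyte(tifffile.imread("Snap-953_c1.tif"))  # single-channel DAPI

# Mean threshold (same idea as ImageJ's "Mean" auto-threshold)
mask = dapi > filters.threshold_mean(dapi)

# Morphological opening to remove small specks (radius 3 is a guess)
mask = morphology.binary_opening(mask, morphology.disk(3))

# Watershed on the distance transform to split touching nuclei
distance = ndi.distance_transform_edt(mask)
markers, _ = ndi.label(morphology.h_maxima(distance, 2))  # h=2 is a guess
labels = segmentation.watershed(-distance, markers, mask=mask)

# ilastik's Segmentation tab wants a binary (or label) image
tifffile.imwrite("Snap-953_c1_mask.tif", ((labels > 0) * 255).astype(np.uint8))
```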

now open ilastik and select “Object Classification… Segmentation”

Add the Alexa one in the Raw Data tab:

and the binary image in the Segmentation tab:

select all the features except for location (see below for more explanation):

now label some positives (use the eye icons in the left panel to turn layers on or off):

label some negatives:

now hit the “Live Update” button to see each object’s predicted class:

continue to add and remove labels until the predictions look correct. You can output the results in a table and add more unlabelled images for batch processing. You can also output the labelled image to take back into ImageJ or some other program for further processing.
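For the batch-processing step, ilastik also has a headless mode you can drive from a script once the project is trained. Here’s a rough sketch; the install path, project name, and mask filename are placeholders, so check the ilastik headless documentation for the exact arguments:

```python
# Rough sketch: run a trained object classification project headlessly.
import subprocess

subprocess.run(
    [
        "/opt/ilastik/run_ilastik.sh",  # adjust to your ilastik install
        "--headless",
        "--project=double_labeled.ilp",               # your trained project
        "--table_filename=objects.csv",               # per-object results table
        "--raw_data=Snap-953_c2.tif",                 # Alexa channel
        "--segmentation_image=Snap-953_c1_mask.tif",  # binary nuclei mask
    ],
    check=True,
)
```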

Good luck! -John


Hi @johnmc, @Philippe_D_Onofrio

nice reply! I would just add a word of caution about adding all features - this is a little dangerous. In particular, the “location” features introduce absolute pixel coordinates into the feature array. Those only make sense if the position in Cartesian coordinates has any significance for the classification, which is usually not the case. There is a shorthand button that reads “All excl. Location”, so if you really don’t want to think about the features you are using, at least click this button.
This will still produce a lot of features. Keep this in mind when annotating: with each click that labels a cell as belonging to a certain class, you add one training data point. With that many features you will need many annotations (more than the number of features). In pixel classification, each brush stroke adds many data points (one per painted pixel), but in object classification it is one data point per object. This is why it usually makes sense to think about which features to use.

For this particular case, I would probably start by looking at the Intensity features.