Context: I am new to microscopy and image analysis. I recently became acquainted with ImageJ, Fiji, and Icy after acquiring confocal images from an RNA FISH experiment. My images are .nd2-formatted and have three channels. The goal of my analysis is to answer (at least initially) the following two questions:
What percent of cells are GFP positive? (GFP is ubiquitously expressed in these cells, shown in green above.)
Of the cells expressing GFP, how many puncta (corresponding to my FISH probes) are present?
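To make concrete what I ultimately want to compute, here is a toy sketch of the two metrics (this assumes I can eventually get a labeled cell image, a per-cell GFP call, and a binary puncta mask; all array and variable names here are hypothetical, not output from any plugin):

```python
import numpy as np
from scipy import ndimage

# Hypothetical toy data: a labeled cell image (0 = background, 1..N = cells),
# a GFP-positive call per cell, and a binary puncta mask from the red channel.
cell_labels = np.zeros((10, 10), dtype=int)
cell_labels[1:4, 1:4] = 1   # cell 1
cell_labels[6:9, 6:9] = 2   # cell 2

gfp_positive = {1: True, 2: False}  # e.g. from mean green intensity per cell

puncta_mask = np.zeros((10, 10), dtype=bool)
puncta_mask[2, 2] = True    # one punctum inside cell 1
puncta_mask[7, 7] = True    # one punctum inside cell 2 (GFP-negative)

# Question 1: percent of cells that are GFP positive.
n_cells = cell_labels.max()
pct_gfp = 100.0 * sum(gfp_positive.values()) / n_cells

# Question 2: puncta per GFP+ cell. Label the puncta, then read the cell
# label under each punctum's centroid.
puncta_labels, n_puncta = ndimage.label(puncta_mask)
centroids = ndimage.center_of_mass(puncta_mask, puncta_labels,
                                   range(1, n_puncta + 1))
puncta_per_cell = {c: 0 for c in range(1, n_cells + 1)}
for y, x in centroids:
    c = cell_labels[int(round(y)), int(round(x))]
    if c and gfp_positive.get(c, False):
        puncta_per_cell[c] += 1
```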
Currently I am attempting to use the Trainable Weka Segmentation plugin in Fiji to segment my cells in the green channel, in order to ultimately create a binary image from the probability map that I can use to extract ROIs for downstream analysis with an image merge of the red channel (where the probes are detected).
- I began training in Weka and initially found good results. However, when I tried to apply my classifier to the exact same image I trained it on, I intermittently got a heap memory error. This error never appeared when I applied a classifier directly after training - only when I first loaded a saved classifier.
Error: “java.lang.OutOfMemoryError: Java heap space”
Error: “java.lang.OutOfMemoryError: GC overhead limit exceeded”
My images are 16-bit color, ~23 MB for a single channel (I split the channels and segment on the green channel alone). My computer has a quad-core Intel i5 processor and 8 GB of RAM, and the status bar shows 3952 MB available. I've tried increasing this manually in Edit > Options > Memory & Threads to about 90% of my RAM, but Fiji threw an error telling me to check the installation guide; path randomization is off, so I don't understand the problem there.
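For what it's worth, my rough understanding is that Weka builds a feature stack where each selected feature becomes another floating-point copy of the image, so memory use grows far beyond the raw file size. A back-of-envelope sketch (the image dimensions and feature count here are assumed for illustration, not my actual numbers):

```python
# Rough heap estimate for a Trainable Weka Segmentation feature stack.
# Assumptions (hypothetical): a 2048 x 2048 single-channel image, features
# stored as 32-bit floats, and ~20 feature images once a couple of features
# are expanded over several sigma values.
width, height = 2048, 2048
bytes_per_float = 4
n_features = 20

feature_stack_mb = width * height * bytes_per_float * n_features / 1024**2
print(feature_stack_mb)  # 320.0 MB, before Weka's own copies and overhead
```

If that picture is right, a 23 MB channel can plausibly balloon to hundreds of MB per open image, which would explain why the heap fills up.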
The features I’ve used for all analyses thus far have included only Gaussian Blur and Difference of Gaussians.
Should I convert from 16-bit to 8-bit and from RGB to greyscale to accomplish this, or can I make it work with what I already have?
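In case the conversion route is the answer: my understanding of what a scaled 16-bit-to-8-bit conversion does (like Image > Type > 8-bit with "Scale When Converting" enabled) is sketched below in NumPy, assuming the display range defaults to the image min/max:

```python
import numpy as np

def to_8bit(img16, lo=None, hi=None):
    """Scale a 16-bit image into 8-bit range, roughly like ImageJ's
    Image > Type > 8-bit with 'Scale When Converting' on.
    lo/hi default to the image min/max (the display range)."""
    img = img16.astype(np.float64)
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    scaled = (img - lo) / max(hi - lo, 1) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(to_8bit(img16))  # [[  0 127 255]]
```

This quarters the memory per pixel, which is part of why I wonder whether it would help with the heap errors, at the cost of intensity resolution.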
- I was able at times to apply a classifier to another image. However, when this worked, the segmentation considered the entire image to be a single segment - I assume due to the large variance in autofluorescence.
Should I attempt to train on multiple images across a range of autofluorescence levels? Or would the plugin be unable to discriminate my classes if they can appear under a wide range of pixel intensities, even though the ratio of intensities (e.g. background vs. GFP+ cell) should remain more or less the same?
Or should I pre-process my images, perhaps with the rolling ball background subtractor, in an attempt to normalize my data set?
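For the pre-processing option, my understanding is that rolling-ball background subtraction (Process > Subtract Background... in Fiji) estimates a smooth background and subtracts it; a crude approximation of the idea, using a greyscale morphological opening instead of an actual rolling ball (the radius here is an assumed value, not a recommendation):

```python
import numpy as np
from scipy import ndimage

def subtract_background(img, radius=15):
    """Approximate rolling-ball background subtraction: estimate the
    background with a greyscale opening whose structuring element is
    larger than the objects of interest, then subtract it."""
    background = ndimage.grey_opening(img, size=2 * radius + 1)
    return np.clip(img.astype(np.int32) - background, 0, None)

# Toy example: flat autofluorescent background of 100 with one bright
# 3x3 "cell" of 200.
img = np.full((40, 40), 100, dtype=np.uint16)
img[10:13, 10:13] = 200
out = subtract_background(img, radius=5)
# The flat 100-level background is removed; the cell keeps its excess signal.
```

If this is roughly right, images with different autofluorescence offsets would end up on a more comparable baseline before training.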
- I’ve read the manual, but its dissection of the features didn’t reveal how I might discriminate objects for a class by size. How might I go about this?
Most features appeared to be texture-, color-, or intensity-related. I’ve tried playing with my sigma values, increasing the minimum sigma to the minimum radius of a cell to avoid debris, but it did not work and I still ended up with the result below (the small dots are unwanted):
Perhaps this kind of filtering would be best done post-segmentation?
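If post-segmentation filtering is indeed the way, I imagine it amounts to something like Analyze Particles with a minimum Size. The idea as I understand it, sketched on a binary mask (function name and `min_size` value are mine, for illustration):

```python
import numpy as np
from scipy import ndimage

def remove_small_objects(mask, min_size):
    """Drop connected components smaller than min_size pixels, roughly
    like Analyze Particles with Size = min_size-Infinity."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.copy()
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size   # keep[0] stays False (background)
    return keep[labels]

mask = np.zeros((20, 20), dtype=bool)
mask[2:10, 2:10] = True   # a real cell, 64 px
mask[15, 15] = True       # a 1 px speck of debris
clean = remove_small_objects(mask, min_size=10)
# The 64 px object survives; the 1 px speck is removed.
```

Would applying something like this to the thresholded probability map be the recommended route, rather than fighting the classifier features?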
Thank you for taking the time to read this. Any and all help would be greatly appreciated! If I’ve missed anything, please let me know - I am not yet knowledgeable enough to know what information is important to share for these kinds of questions.