Multichannel RNAscope Analysis with QuPath

Hello,

We are trying to determine whether QuPath can be used for a specific type of image analysis, and wanted to reach out to see if any users of this forum had recommendations. Below is a brief description of the analysis; we will attach an example image at the bottom of the post. Thanks in advance for any help you can provide!

Each image is a Z-stack through a neuron, containing 20-40 z-slices and 4 different channels. Channels 1, 2, and 4 represent different fluorescently labelled RNA molecules. Channel 3 shows antibody labeling for a neuronal marker, which provides a visual of the cell body. All images are 8-bit grayscale and captured with a fluorescence microscope.

Ultimately, we would like to calculate the number of fluorescently labeled RNA molecules (sometimes referred to as puncta) present within the neuron. This can be done in FIJI by first determining the location of the cell body in channel 3 and creating an ROI. We then select an appropriate threshold for the images. The ROI and the threshold are then applied to channels 1, 2, and 4. Finally, we run the “Analyze Particles” function to quantify the number of molecules present within the ROI in channels 1, 2, and 4. However, given the number of images in our data set, we are interested in whether this process could be augmented/automated using QuPath or another program discussed on this forum (if we are looking in the wrong place, any direction would be appreciated).
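
In script form, those manual steps look roughly like the following Groovy sketch (FIJI’s script editor can run Groovy; the threshold method and particle size limits are placeholders to adapt to the data, and it assumes the cell-body ROI has already been drawn on the image):

// Minimal sketch of the manual FIJI workflow above; threshold and size values are placeholders.
import ij.IJ
import ij.WindowManager
import ij.plugin.Duplicator

def imp = WindowManager.getCurrentImage()
def roi = imp.getRoi()   // cell-body ROI drawn from channel 3
imp.deleteRoi()          // clear it so duplication doesn't crop to the ROI

[1, 2, 4].each { c ->
    // Duplicate one RNA channel (all z-slices) so it can be thresholded independently
    def ch = new Duplicator().run(imp, c, c, 1, imp.getNSlices(), 1, 1)
    ch.setRoi(roi)       // restrict analysis to the cell body
    IJ.setAutoThreshold(ch, "Default dark stack")
    // Count puncta inside the ROI on every slice of this channel
    IJ.run(ch, "Analyze Particles...", "size=3-Infinity pixel summarize stack")
    ch.close()
}
imp.setRoi(roi)          // restore the ROI on the original image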

There are a few posts that deal with somewhat similar challenges related to analyzing images containing RNA molecules, here and here, among several others. These posts suggest that QuPath may be a good fit for what we are trying to do. Again, thanks for any help.

Below is an example image with channels split:

Just to clarify, are you trying to do 3D analysis of ISH spots that will exist in multiple layers, and would pretty much need voxel-based volumes? Or are your Z slices far enough apart that any spot won’t exist in more than one slice, and you are simply summing across the slices?

Also, do you have the original image/volume? Splitting it into 4 different files won’t make things easy in QuPath.

Yes, the Z slices are far enough apart to avoid this. If I understand your second question correctly, then summing across the slices would work, as long as we could determine the number of RNA molecules present within the ROI as well.

Here is a zip file of the original image, example_1.zip (6.7 MB).

Hmm, so first off, your Z slices are definitely close enough together that the same spots show up in at least 3 frames at a time. I don’t have a great way at the moment to deal with that in QuPath, though you might be able to check, to a certain extent, whether the spots overlap between slices, and if so remove the smaller spot. I haven’t really done any 3D coding in QuPath, but right now you are dealing with objects that do stretch between slices, and you will need to find some way to handle that.
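
Very roughly, that overlap check might look something like this untested Groovy sketch (it assumes spot detections already exist on each slice):

// Untested sketch: where detections on adjacent z-slices overlap,
// remove the smaller one so each spot is only counted once.
import qupath.lib.roi.RoiTools
import qupath.lib.roi.RoiTools.CombineOp

def byZ = getDetectionObjects().groupBy { it.getROI().getZ() }
def toRemove = [] as Set
byZ.keySet().sort().each { z ->
    def nextSlice = byZ[z + 1]
    if (nextSlice == null)
        return
    byZ[z].each { a ->
        nextSlice.each { b ->
            def overlap = RoiTools.combineROIs(a.getROI(), b.getROI(), CombineOp.INTERSECT)
            if (!overlap.isEmpty())
                toRemove << (a.getROI().getArea() < b.getROI().getArea() ? a : b)
        }
    }
}
removeObjects(toRemove, true)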

I’ll need some help from @petebankhead @melvingelbard. I can create a cell object throughout the Z stack fairly easily using the simple thresholder, but I’m unable to figure out how to target Subcellular detections to use that outline.
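
For reference, subcellular detection is normally scripted against existing cell objects, roughly like this (the parameter values are placeholders) — which is exactly where the thresholder outline doesn’t fit:

// Placeholder values - subcellular detection targets cell objects, not annotations
selectCells()
runPlugin('qupath.imagej.detect.cells.SubcellularDetection', '{"detection[Channel 1]": 30.0, "doSmoothing": false, "splitByIntensity": true, "splitByShape": true, "spotSizeMicrons": 1.0, "minSpotSizeMicrons": 0.5, "maxSpotSizeMicrons": 2.0, "includeClusters": true}')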

On the other hand, if you are conversant with ImageJ macro coding, you might be able to call an ImageJ script on the ROIs generated by the Simple thresholder.



You also may or may not want to dilate the cell ROI, as some of the ISH spots clearly fall outside the blue boundary.
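
If you do want to expand it, the annotation can be dilated by script; something like this (the radius value is a placeholder):

// Dilate the selected annotation(s) so spots just outside the marker boundary are included
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": 2.0, "removeInterior": false, "constrainToParent": true}')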

A quick code example: if you want to cycle through all slices and run cell detection, you can use something like this:

setImageType('FLUORESCENCE');
// Create a full-image annotation on each of the 22 z-slices
for (i=0;i<22;i++){
    createSelectAllObject(true, i, 0);
}
selectAnnotations();
// Run watershed cell detection on the neuronal marker (Channel 3) within each annotation
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImage": "Channel 3",  "requestedPixelSizeMicrons": 0.5,  "backgroundRadiusMicrons": 0.0,  "medianRadiusMicrons": 3.0,  "sigmaMicrons": 3.0,  "minAreaMicrons": 1000.0,  "maxAreaMicrons": 100000.0,  "threshold": 10.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 1.0,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}');

But I wasn’t able to get cell detection to work well enough across all of the slices.

I don’t think QuPath is the right starting point for this, since it is 3D and quite removed from anything QuPath has been designed for to date. It might very well be an application QuPath can handle in the future, but the algorithms and code still need to be written… and this is not a small task.

I’d suggest posting a broader question looking for suggestions from any software. The tags on the post help determine who will see it (currently the only tag is ‘qupath’).


I don’t know how to solve the issue of objects stretching between slices in QuPath, but FIJI + TrackMate can be scripted to handle the 3D spot segmentation problem fairly well (although I assume dedicated 3D segmentation plugins would be even more efficient).

The following .tif shows the segmentation overlay for the C4 example after detection, with the colour gradient corresponding to Z-stack length (avoiding repeated quantification of multiple Z instances of the same spot). Is this along the lines of what you need?

C4_segmented.tif (7.5 MB)


@Research_Associate and @petebankhead, thanks for your responses; it seems like we need to do a bit more thinking about our analysis. Posting the question with more general tags is a great starting point.

@pedrolmoura, thanks for the recommendation. This approach looks promising; we’ll be sure to check it out.


@Mcnulty, the following Jython script (adapted from several parts of https://imagej.net/Scripting_TrackMate to suit your image after pre-processing, i.e. mask generation + 32-bit multiplication to define the ROI) should be a decent starting point if you’d like to delve further into TrackMate. My approach would then be to iterate the detection process over each channel, but again, someone has probably already developed a more efficient way to deal with this problem. Hope it helps, anyway!

import ij.IJ
import ij.WindowManager
from fiji.plugin.trackmate import Model
from fiji.plugin.trackmate import Settings
from fiji.plugin.trackmate import TrackMate
from fiji.plugin.trackmate import SelectionModel
from fiji.plugin.trackmate import Logger
from fiji.plugin.trackmate.detection import LogDetectorFactory
from fiji.plugin.trackmate.tracking.sparselap import SparseLAPTrackerFactory
from fiji.plugin.trackmate.tracking import LAPUtils
import fiji.plugin.trackmate.visualization.hyperstack.HyperStackDisplayer as HyperStackDisplayer
import fiji.plugin.trackmate.features.FeatureFilter as FeatureFilter
import fiji.plugin.trackmate.visualization.PerTrackFeatureColorGenerator as PerTrackFeatureColorGenerator
import sys
import fiji.plugin.trackmate.features.track.TrackDurationAnalyzer as TrackDurationAnalyzer
import fiji.plugin.trackmate.features.track.TrackIndexAnalyzer as TrackIndexAnalyzer

# Get currently selected image
imp = ij.WindowManager.getCurrentImage()

# Create model object
model = Model()
    
# Send all messages to ImageJ log window.
model.setLogger(Logger.IJ_LOGGER)

# Swap Z and T dimensions if T=1
dims = imp.getDimensions() # default order: XYCZT
if (dims[4] == 1):
    imp.setDimensions( dims[2],dims[4],dims[3] )

# Get the number of channels 
nChannels = imp.getNChannels()

settings = Settings()
settings.setFrom(imp)
settings.addTrackAnalyzer( TrackDurationAnalyzer() )
settings.addTrackAnalyzer( TrackIndexAnalyzer() )


# Configure detector - We use the Strings for the keys
settings.detectorFactory = LogDetectorFactory()
settings.detectorSettings = { 
    'DO_SUBPIXEL_LOCALIZATION' : True,
    'RADIUS' : 0.5,
    'TARGET_CHANNEL' : 1,
    'THRESHOLD' : 3000.0,
    'DO_MEDIAN_FILTERING' : False,
}  
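
# To quantify all three RNA channels, the detection (and tracking) steps could
# be repeated with 'TARGET_CHANNEL' set to each channel of interest in turn
# (nChannels above gives how many are available).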

# Spot tracker.
settings.trackerFactory = SparseLAPTrackerFactory()
settings.trackerSettings = LAPUtils.getDefaultLAPSettingsMap()
settings.trackerSettings['MAX_FRAME_GAP']  = 1
settings.trackerSettings['LINKING_MAX_DISTANCE']  = 0.1
settings.trackerSettings['GAP_CLOSING_MAX_DISTANCE']  = 0.1
 
# Run TrackMate with the model and settings configured above.
# (The model was already created earlier; re-creating it here would
# discard the logger we attached.)
trackmate = TrackMate(model, settings)

ok = trackmate.checkInput()
if not ok:
    sys.exit(str(trackmate.getErrorMessage()))
    
ok = trackmate.process()
if not ok:
    sys.exit(str(trackmate.getErrorMessage()))

# Prepare display.
sm = SelectionModel(model)
color = PerTrackFeatureColorGenerator(model, 'TRACK_DURATION')
 
# Create hyperstack viewer to overlay detection
view = HyperStackDisplayer(model, sm,imp)

# Display tracks as comets
view.setDisplaySettings('TrackDisplaymode', 1)
view.setDisplaySettings('TrackDisplayDepth', 20)
view.setDisplaySettings('TrackColoring', color)
view.render()