QuPath + StarDist: input and filter must have the same depth: 3 vs 1

Hey there, it’s me again with another (possibly stupid) quick question: after fixing all the TensorFlow errors in my StarDist setup, I now get this message:

INFO: Loaded TensorFlow bundle: /Users/tm/Documents/models2/stardist/TF_SavedModel, (input=input:0 [-1,-1,-1,1], output=concatenate_4/concat:0 [-1,-1,-1,33])
ERROR: RuntimeException: input and filter must have the same depth: 3 vs 1
	 [[{{node conv2d_1/Relu}}]]

ERROR: qupath.tensorflow.TensorFlowOp$TensorFlowBundle.run(TensorFlowOp.java:314)
    java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
    java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
    java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
    java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
    java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(Unknown Source)
    java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(Unknown Source)
    java.base/java.util.stream.AbstractTask.compute(Unknown Source)
    java.base/java.util.concurrent.CountedCompleter.exec(Unknown Source)
    java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
    java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
    java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
    java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
    java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)

So the main error is that the depth doesn’t match. In QuPath I have found this:
“Images within QuPath can have different numbers of channels and various bit-depths, but when they are displayed they generally need to be rendered as 3-channel, 8-bit RGB.” So my QuPath images have a depth of 3, then? This is my image:

Meanwhile, in my Jupyter notebook I find this: unet_n_conv_per_depth=2, unet_n_depth=3 So that is 3 as well, right? Or am I looking at the wrong parameter?

So how can I resolve this? Thanks in advance for any suggestions!

Independent of rendering, it looks like the raw data of that image is in RGB format – so it will have 3 input channels.

I don’t think the parameters you mention are the ones you’ll need, but rather n_channel or n_channel_in (haven’t tried it, just looked quickly here).
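To make the distinction concrete: `n_channel_in` must match the channel count of the actual pixel data, not the U-Net depth. A minimal sketch of how that count can be inferred, assuming the image is loaded as a NumPy array in YX or YXC layout as in the StarDist example notebooks (the helper name is just illustrative):

```python
import numpy as np

def infer_n_channel(img: np.ndarray) -> int:
    """Return the channel count Config2D's n_channel_in must match.

    Assumes a 2D YX array is single-channel and a 3D array is YXC,
    as in the StarDist 2D example notebooks.
    """
    return 1 if img.ndim == 2 else img.shape[-1]

# A grayscale patch needs n_channel_in = 1 ...
assert infer_n_channel(np.zeros((128, 128))) == 1
# ... while an RGB patch needs n_channel_in = 3
assert infer_n_channel(np.zeros((128, 128, 3))) == 3
```

`unet_n_depth`, by contrast, controls how many down/up-sampling levels the U-Net has and is unrelated to the channel mismatch in the error.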


You seem to be using a model that expects one channel as input and you are feeding it a three channel image. Can you post the script you are trying to run?

EDIT: the “Depth” here is not related to Bit Depth but rather to the depth (3rd dimension) of your input dataset.


Okay, I didn’t know that - so thanks for a start :wink:

I have reverted all my modifications to make sure that it wasn’t a stupid mistake in the code. The error also occurs when I use your example notebook. After @petebankhead’s reply I tried the following to change the input channels:

# 32 is a good default choice (see 1_data.ipynb)
n_rays = 32
# Use OpenCL-based computations for data generator during training (requires 'gputools')
use_gpu = False and gputools_available()
# prevent error message, set train_patch_size to < lowest image patch_size
train_patch_size = (128,128)
# Predict on subsampled grid for increased efficiency and larger field of view
grid = (2,2)
# for playing around ~5, for training ~400
train_epochs = 5
n_channel_in = 3

conf = Config2D (
    n_rays       = n_rays,
    grid         = grid,
    use_gpu      = use_gpu,
    n_channel_in = n_channel_in, # ORIGINAL: n_channel_in = n_channel,
    train_patch_size = train_patch_size,
    train_epochs = train_epochs,
)
as well as changing n_channel = 1 to n_channel = 3 at the beginning of the notebook,

but then got: Error when checking input: expected input to have shape (None, None, 3) but got array with shape (128, 128, 1)

I have attached an example image + mask. I generated the images with the script at the end of the post written by you and your colleagues (thanks a million for that!). The images used for training have exactly the same format as the test images. Any ideas?

edcb929c-1244-4102-99f3-c463e2a287a2_r5.tif (296.2 KB) edcb929c-1244-4102-99f3-c463e2a287a2_r5.tif (592.3 KB)

Exports Annotations for StarDist (Or other Deep Learning frameworks) 

You will need to install the BIOP Extension for QuPath which contains methods needed to run this code

You need rectangular annotations that have classes "Training" and "Validation"
After you have placed these annotations, lock them and start drawing the objects inside

The script will export each annotation and whatever is contained within as an image-label pair
These will be placed in the folder specified by the user in the main project directory.
Inside that directory, you will find 'train' and 'test' directories that contain the images with 
class 'Training' and 'Validation', respectively. 
Inside each, you will find 'images' and 'masks' folders containing the exported images and the labels,
respectively. The naming convention was chosen to match the one used for the StarDist DSB dataset.

- channel_of_interest: You can export a single channel or all of them, currently no option for _some_channels only
- downsample: you can downsample your image in case it does not make sense for you to train on the full resolution
- export_directory: name of the directory which will contain the 'train' and 'test' subdirectories

Authors: Olivier Burri, Romain Guiet BioImaging and Optics Platform (EPFL BIOP)

Tested on QuPath 0.2.0-m11, May 6th 2020

Due to the simple nature of this code, no copyright is applicable

def channel_of_interest = 0 // null to export all the channels 
def downsample = 1


def training_regions = getAnnotationObjects().findAll { it.getPathClass() == getPathClass("Training") }

def validation_regions = getAnnotationObjects().findAll { it.getPathClass() == getPathClass("Validation") }

if (training_regions.size() > 0 ) saveRegions( training_regions, channel_of_interest, downsample, 'train')

if (validation_regions.size() > 0 ) saveRegions( validation_regions, channel_of_interest, downsample, 'test')

def saveRegions( def regions, def channel, def downsample, def type ) {
    // Randomize names
    def is_randomized = getProject().getMaskImageNames()
    def rm = RoiManager.getRoiManager() ?: new RoiManager()
    // Get the image name
    def image_name = getProjectEntry().getImageName()
    regions.eachWithIndex{ region, region_idx ->
        println("Processing Region #" + ( region_idx + 1 ))
        def file_name = image_name + "_r" + ( region_idx + 1 )
        imageData = getCurrentImageData();
        server = imageData.getServer();
        viewer = getCurrentViewer();
        hierarchy = getCurrentHierarchy();

        //def image = GUIUtils.getImagePlus( region, downsample, false, true )
        request = RegionRequest.createInstance(imageData.getServerPath(), downsample, region.getROI())
        pathImage = IJExtension.extractROIWithOverlay(server, region, hierarchy, request, false, viewer.getOverlayOptions());
        image = pathImage.getImage()
        println("Image received")
        // Create the Labels image
        def labels = IJ.createImage( "Labels", "16-bit black", image.getWidth(), image.getHeight(), 1 );
        IJ.run(image, "To ROI Manager", "")
        def rois = rm.getRoisAsArray() as List
        println("Creating Labels")
        def label_ip = labels.getProcessor()
        def idx = 0
        rois.each{ roi ->
            if (roi.getType() == Roi.RECTANGLE) {
                println("Ignoring Rectangle")
            } else {
                label_ip.setColor( ++idx )
                label_ip.setRoi( roi )
                label_ip.fill( roi )
            }
        }
        labels.setProcessor( label_ip )
        // Split to keep only the channel of interest
        def output = image
        if ( channel != null ) {
            imp_chs = ChannelSplitter.split( image )
            output = imp_chs[ channel - 1 ]
        }
        saveImages(output, labels, file_name, type)
        println( file_name + " Image and Mask Saved." )
        // Save some RAM
        image.close()
        labels.close()
        rm.reset()
    }
    // Return Project setup as it was before
    getProject().setMaskImageNames( is_randomized )
}

// This will save the images in the selected folder
def saveImages(def images, def labels, def name, def type) {
    def source_folder = new File ( buildFilePath( PROJECT_BASE_DIR, 'ground_truth', type, 'images' ) )
    def target_folder = new File ( buildFilePath( PROJECT_BASE_DIR, 'ground_truth', type, 'masks' ) )
    mkdirs( source_folder.getAbsolutePath() )
    mkdirs( target_folder.getAbsolutePath() )
    IJ.save( images , new File ( source_folder, name ).getAbsolutePath() + '.tif' )
    IJ.save( labels , new File ( target_folder, name ).getAbsolutePath() + '.tif' )
}

// Manage Imports
import qupath.lib.roi.RectangleROI
import qupath.imagej.gui.IJExtension;
import ij.IJ
import ij.gui.Roi
import ij.plugin.ChannelSplitter
import ij.plugin.frame.RoiManager
print "done"

I have also filed an issue on GitHub, where you can see the complete error message from the Jupyter notebook when I change the input channels. If you want, I can remove the issue, but I am never sure whether you StarDist guys are active on image.sc. :wink: So really, thank you so much for your advice!

From what I see here, everything is working as intended. You have exported a one-channel image, so the error you see when training makes sense: you have a one-channel tiff, but you are telling StarDist that you will train on a 3-channel input, hence the error.

So my interpretation is that your initial training image export should be amended to export a 3-channel image.

If you are using our little export script from Export QuPath Annotations for StarDist Training, setting the channel_of_interest to null should export all 3 channels.
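To catch this mismatch before training even starts, the exported tiffs can be checked against the configured channel count. A minimal sketch, assuming the images have already been loaded as NumPy arrays (e.g. with tifffile); the `check_channels` helper is illustrative, not part of the export script:

```python
import numpy as np

def check_channels(images, n_channel_in):
    """Raise if any training image's channel count differs from the config.

    Each element of `images` is a NumPy array: 2D arrays count as
    single-channel, 3D arrays are treated as YXC.
    """
    for i, img in enumerate(images):
        n = 1 if img.ndim == 2 else img.shape[-1]
        if n != n_channel_in:
            raise ValueError(
                f"image {i}: expected {n_channel_in} channel(s), found {n}"
            )

# With channel_of_interest = 0 the script exports single-channel tiffs,
# so a config with n_channel_in = 3 fails, mirroring the error above:
try:
    check_channels([np.zeros((128, 128))], 3)
except ValueError as e:
    print(e)  # image 0: expected 3 channel(s), found 1
```

Running this over the 'train' and 'test' folders right after export makes it obvious whether the export settings and the notebook config agree.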


Please don’t discuss a StarDist problem on the forum and open an issue in the GitHub repository for the same problem at the same time. I’m subscribed to all stardist-tagged posts on the forum and will eventually get to it (if @mweigert doesn’t beat me to it).

Discussing on the forum has the obvious advantage that other knowledgeable people can solve the problem before we developers get to it. This thread is a great example of that.



Thanks for the feedback - you are absolutely right!

As always it was indeed as easy as changing “0” to “null” and now it works. Thank you for your advice!

Hello, I am having a similar issue to this post, however not with a training image, just with an actual image. I have a 4-channel image and want to run StarDist on the DAPI channel. It gives me this error:

INFO: Loaded TensorFlow bundle: /Users/KylieC/Documents/Documents - Kylie’s MacBook Pro/Protocols/QuPath scripts/dsb2018_heavy_augment, (input=input:0 [-1,-1,-1,1], output=concatenate_4/concat:0 [-1,-1,-1,33])
ERROR: RuntimeException: input and filter must have the same depth: 4 vs 1
	 [[{{node conv2d_1/Relu}}]]

@Kylie You need to specify which channel should be used, for example

def stardist = StarDist2D.builder(pathModel)
        .threshold(0.5)              // Probability (detection) threshold
        .channels('DAPI')            // Specify detection channel (THE IMPORTANT BIT HERE!)
        .normalizePercentiles(1, 99) // Percentile normalization
        .pixelSize(0.5)              // Resolution for detection
        .build()
This will only work if your channel is called ‘DAPI’ – but you should be able to use another name (or a number) instead. If using a number, be careful that it might expect the first channel to be 0 (I can’t actually remember whether it’s 0-based or 1-based, but I think 0… that’s why channel names are preferred).

The code for this is at StarDist — QuPath 0.2.3 documentation
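To illustrate why name-based selection sidesteps the 0-based vs 1-based ambiguity, a tiny sketch (the channel list is hypothetical, standing in for a 4-channel image’s metadata):

```python
# Hypothetical channel names for a 4-channel image, in acquisition order
channels = ["DAPI", "FITC", "TRITC", "Cy5"]

# Looking the channel up by name yields the right index regardless of
# which numbering convention the caller had in mind
dapi_index = channels.index("DAPI")
assert dapi_index == 0  # under 0-based numbering, 'DAPI' is channel 0
```

Passing the name to `.channels(...)` delegates this lookup to QuPath, so there is no off-by-one mistake to make.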


Ah yes - I didn’t notice this line was missing from my command!! Sorry! :woman_facepalming:
That’s it running now - thank you :blush: