Segmentation of individual tiles and region merging after stitching

Hi all,
I am working on large images with multiple nuclear markers, including DAPI.
I use the DAPI signal for segmentation with #stardist or #cellpose. Unfortunately, I could not segment the stitched image directly (it is too large), so I have to work on individual tiles in parallel before stitching.
The main issue is that many cells are split between adjacent tiles and are counted twice (or more, if they sit at a corner shared by four tiles).
I am wondering how people usually deal with this region-merging problem?
Thank you for your time and help.

I don’t know about cellpose, though it would be nice, but have you considered running StarDist on the whole image?

Thank you for your answer @Research_Associate. The integration with #qupath looks very cool, I will definitely try it.
I do wonder, though: in the video it seems that #qupath first splits the image into chunks before doing the segmentation with stardist. If that is the case, how does it deal with adjacent regions?
The instructions say:
QuPath will overlap the regions and then try to resolve cells detected on region boundaries to avoid weird artefacts in these areas (e.g. cells being cut in half, or detected twice).


I haven’t used it myself, but I am aware of some discussions on tiled labelling using dask_image.
See the discussions in this pull request and this issue. If you’re working with Python, that might be an option.

@jni may be able to provide some more info.

Not 100% sure, but I suspect it is an extension of StarDist itself. The tiles for cell detection overlap by a certain percentage, I think, and within that overlap it should resolve objects detected in either adjacent tile. As usual, the highest-probability objects “win.”

The tiled labeling only works for perfect matches, as is the case for connected components in a boolean volume. However, the principle is general and should work by computing overlap quantities between adjacent label volumes and thresholding those quantities.
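To make the overlap-and-threshold idea concrete, here is a minimal numpy sketch (the function name and the IoU threshold are illustrative, not from dask_image): given the label images of two adjacent tiles restricted to their shared overlap strip, it pairs up labels that refer to the same object.

```python
import numpy as np

def match_labels_in_overlap(left_strip, right_strip, iou_threshold=0.5):
    """Map labels in `right_strip` to labels in `left_strip` when the same
    object appears in the overlap region of two adjacent tiles.

    Both arrays are the label images restricted to the shared overlap, so
    identical pixel positions refer to the same physical location. Returns
    a dict {right_label: left_label} for pairs whose intersection-over-union
    exceeds the threshold (names and threshold are illustrative).
    """
    mapping = {}
    for r in np.unique(right_strip):
        if r == 0:  # background
            continue
        r_mask = right_strip == r
        # Candidate partner: the most frequent left-tile label under this object
        candidates = left_strip[r_mask]
        candidates = candidates[candidates != 0]
        if candidates.size == 0:
            continue
        l = np.bincount(candidates).argmax()
        l_mask = left_strip == l
        iou = np.logical_and(r_mask, l_mask).sum() / np.logical_or(r_mask, l_mask).sum()
        if iou > iou_threshold:
            mapping[r] = l
    return mapping
```

With the mapping in hand, relabelling the right tile (and keeping a global label counter for unmatched objects) gives a merged result without double counting.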

For more information about what QuPath is doing with StarDist specifically see Stardist extension and also

QuPath does not permit overlapping nuclei. Rather, it handles overlaps by retaining the nucleus with the highest prediction probability unchanged, and removing overlapping areas from lower-probability detections - discarding these detections only if their area decreases by more than 50%.

This resolution is done with the help of the Java Topology Suite and is based upon contours - not the rasterized image. One advantage of StarDist (and perhaps CellPose?) is that the output is already in the form of vertices, which gives less severe boundary artifacts than a segmentation that outputs a binary/labelled image.
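For illustration, the resolution rule quoted above could be sketched on rasterized masks like this (QuPath itself resolves contours via the Java Topology Suite; this numpy version is only meant to make the rule concrete and is not QuPath's actual code):

```python
import numpy as np

def resolve_overlaps(masks, probs, max_loss=0.5):
    """Sketch of the overlap-resolution rule: higher-probability nuclei
    keep their pixels unchanged; lower-probability ones lose the
    overlapping area and are discarded if they shrink by more than 50%.

    masks: list of boolean arrays (one per detection), probs: matching
    prediction probabilities. Returns the surviving (trimmed) masks.
    """
    order = np.argsort(probs)[::-1]           # highest probability first
    claimed = np.zeros_like(masks[0], dtype=bool)
    kept = []
    for i in order:
        original = masks[i]
        trimmed = original & ~claimed         # remove already-claimed pixels
        if trimmed.sum() >= (1 - max_loss) * original.sum():
            kept.append(trimmed)
            claimed |= trimmed
        # else: dropped - it lost more than half its area
    return kept
```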


Thank you @petebankhead and @Research_Associate for your answers on #qupath. I still have a question, though: in general, how does qupath resolve cells detected on region boundaries?

If you are no longer talking about tiling and instead have created artificial boundaries from annotations (or only want to extract a single tile to look at, which is another annotation), then it simply chops off at the border. You will get half nuclei if half the nucleus is visible and it passes all of the threshold cutoffs.

As far as I know, there is no automatic removal of nuclei touching annotation borders, though it could be scripted. Pete posted a similar script for annotations earlier in the week.

@bioimage-analysis I’m not sure I understand exactly what you’re asking regarding QuPath, but as the person who wrote all the code I should be in the best position to answer :slight_smile:

For annotation boundaries it is assumed that detections can simply be cropped; you can always expand the annotations if needed.

For tiles generated automatically within large annotations, what happens does somewhat depend upon the exact command, but some of the logic used in QuPath is here:

It’s not terribly elegant and is something I plan to revisit in the future.

One ‘easy’ general rule when it comes to counting is to keep all objects touching the top/left boundary of a tile and discard all objects touching the right/bottom. Some variation of this might work for segmentation as well, but it may be tricky to tune the amount of overlap relative to the object size.
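As a sketch, that counting rule might look like the following (my own illustration, names are made up; it assumes objects are smaller than a tile so that every object touches at most one tile's kept boundary):

```python
import numpy as np

def exclude_right_bottom(labels):
    """Keep objects touching the top/left boundary of a tile, discard any
    object touching the right/bottom edge. Applied to every tile of a
    grid, each object is then counted exactly once.
    """
    # Labels present on the bottom row or rightmost column
    border_labels = np.unique(np.concatenate([labels[-1, :], labels[:, -1]]))
    drop = set(border_labels.tolist()) - {0}
    out = labels.copy()
    out[np.isin(out, list(drop))] = 0
    return out
```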

It’s rather a lot simpler if everything is computed locally and so you can ‘guarantee’ that the same pixels will be detected regardless of the starting point (subject to enough tile padding). This unfortunately isn’t always the case with QuPath’s built-in cell segmentation because of the way it does background subtraction (using opening by reconstruction), which is why it ends up with the more complicated method above. But that is not the logic QuPath uses with stardist… which is code I wrote only in the last couple of weeks and which is better described in the links I posted earlier.


Thank you and I’m sorry I was not clear, but your answer was what I was looking for.

FYI: the experimental branch of stardist already has functionality that provides implicit stitching of results from adjacent tiles. Similarly to what @petebankhead described, each tile’s regions are predicted including an overlap and then propagated along a predefined adjacency direction (right/bottom). For that to work, all objects need to be smaller than the overlap, which in most cases can be estimated beforehand.
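The key property of such a scheme can be shown with a toy 1-D sketch (my own illustration, not the actual StarDist code): if tiles advance by a stride of `tile_width - overlap` and every object is narrower than the overlap, then each object starts inside exactly one tile's non-shared core region, and that tile sees it completely, so it is counted exactly once.

```python
def assign_to_tiles(object_starts, tile_width, overlap):
    """Toy 1-D illustration of tile ownership under overlapped tiling.

    Tile k covers global columns [k*stride, k*stride + tile_width), with
    stride = tile_width - overlap. An object, given by the global column
    where it starts, is owned by the tile whose core region (the part not
    shared with the next tile) contains that start. Because objects are
    assumed smaller than the overlap, the owning tile always sees them
    in full, so no object is split or double-counted.
    """
    stride = tile_width - overlap
    return [int(start // stride) for start in object_starts]
```

For example, with `tile_width=100` and `overlap=20`, an object starting at column 79 belongs to tile 0 even though tile 1 also sees part of it.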



Hello @mweigert, could you tell us a bit more about this tiling/stitching feature in the experimental Stardist branch? Which branch is it? How can we use it? Can we use it “from outside”, i.e. use only the stitching part after having sliced and segmented the WSI separately?

In our case the images are too big to fit in memory, so we would need to proceed tile by tile.

Thanks a lot!