CellProfiler 3.0 Watershed 3D algorithm overview request

cellprofiler
3d
watershed

#1

Would a dev be kind enough to provide a very high level overview about how Watershed works in 3D mode? Let’s say the input is one stack of binarized images, representing the output of the Threshold module on nuclei images acquired on multiple Z planes. When Watershed is applied on the binarized images, is segmentation performed on each plane in the stack separately before some kind of image registration is performed to find/connect single objects across multiple planes? Or are multiple planes considered simultaneously to perform the initial segmentation? Or is it some other approach entirely?

Thanks much in advance!


#2

Hi,

I believe all the planes are considered simultaneously. Please let me know if you have any follow-up issues!


#3

I agree, but I would love to hear an explanation of the basic idea of how it works; we could update the documentation to explain it. @mcquin or @allen_goodman, care to chime in?


#4

Here’s the relevant documentation from skimage, whose implementation we use to run this.


#5

Thanks Beth for the clarification and the link! However, that link seems to just describe Watershed for the 2D case. Would you be willing to elaborate a bit further on how multiple planes are considered and linked for segmentation in 3D?


#6

The whole volume is considered at once, not plane by plane; the strategy is identical in 2D and 3D. I'll describe the strategy, then explain the key point more thoroughly.


First, the image is converted to a binary mask: is each pixel "on" or "off"? Typically you do this by thresholding the fluorescence intensity.
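In code, that step might look something like this minimal sketch (using skimage's Otsu threshold on a placeholder stack, not CellProfiler's exact implementation):

```python
import numpy as np
from skimage.filters import threshold_otsu

# Placeholder Z-stack of fluorescence intensities, shaped (Z, Y, X).
volume = np.random.rand(16, 256, 256)

# Binarize: True = "on" (inside an object), False = "off" (background).
binary = volume > threshold_otsu(volume)
```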

Then, for each voxel (a pixel plus its plane) that's "on", we compute how far it is in 3D from the nearest voxel that's "off". Each voxel is then assigned an intensity equal to that distance (this is called a distance transform, and it's visualized as "Distances" in the skimage link).
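That step is a single call to scipy's Euclidean distance transform; here's a toy sketch:

```python
import numpy as np
from scipy import ndimage

# Toy binary Z-stack with one "on" block.
binary = np.zeros((8, 64, 64), dtype=bool)
binary[2:6, 20:40, 20:40] = True

# Each "on" voxel gets its 3D Euclidean distance to the nearest "off" voxel.
distance = ndimage.distance_transform_edt(binary)
```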

Next, you look for the highest values in your distance-transformed image. Assuming your objects are roughly spherical, the voxels that are farthest from the edge (i.e., have the highest distance from the "off" voxels) will be the centers of your objects. We identify each such local maximum and call it a seed.
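Seed-finding can be sketched with skimage's peak_local_max (again a toy example rather than CellProfiler's exact code):

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max

# Toy mask with two touching objects, then its distance transform.
binary = np.zeros((8, 64, 64), dtype=bool)
binary[2:6, 10:34, 10:34] = True
binary[2:6, 10:34, 30:54] = True
distance = ndimage.distance_transform_edt(binary)

# Local maxima of the distance transform become seed coordinates...
coords = peak_local_max(distance, labels=binary)
seeds = np.zeros(binary.shape, dtype=bool)
seeds[tuple(coords.T)] = True

# ...and each connected group of seed voxels gets its own integer label.
markers, n_seeds = ndimage.label(seeds)
```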

Finally, you start from the seeds and push outward, adding voxels to each object as you go, until every "on" voxel has been assigned to some object.
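The flooding itself is one call to skimage's watershed, continuing from the seeding sketch above:

```python
from skimage.segmentation import watershed

# Flood outward from the markers over the negative distance transform;
# mask=binary confines the flooding to "on" voxels, so every "on" voxel
# ends up in exactly one labeled object.
labels = watershed(-distance, markers, mask=binary)
```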


The only difference is in the calculation of the distance transform, which is done by this scipy function: in 2D, you just calculate the distance to a boundary in X and Y; in 3D, you calculate it in X, Y, and Z. Otherwise, the algorithm is identical.
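To make that concrete, here's a rough end-to-end sketch of how I'd wire the pieces together (my own illustration, not the module's actual code); every call is dimension-agnostic, so the very same function segments a 2D image or a 3D stack:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_labels(binary):
    """Distance-transform watershed for a 2D or 3D boolean mask."""
    # Distance to the nearest "off" voxel, in however many dimensions.
    distance = ndimage.distance_transform_edt(binary)
    # Local maxima of the distance map become seeds.
    coords = peak_local_max(distance, labels=binary)
    seeds = np.zeros(binary.shape, dtype=bool)
    seeds[tuple(coords.T)] = True
    markers, _ = ndimage.label(seeds)
    # Flood from the seeds until every "on" voxel is assigned.
    return watershed(-distance, markers, mask=binary)
```

Nothing in that function knows whether it received a (Y, X) image or a (Z, Y, X) volume, which is the key point.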

Did that help at all?


#7

Thanks Beth! Yes, that’s super helpful and descriptive.


#8

May I ask whether one can control over-segmentation?

Something like "suppress minima closer than …"?


#9

Hi Christian,

Not directly at the moment, but see here.
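For anyone finding this thread later: outside the module itself, a common way to curb over-segmentation with the same skimage tools is to enforce a minimum separation between seeds when picking maxima. This is my own sketch, not a CellProfiler setting:

```python
from skimage.feature import peak_local_max

# min_distance suppresses any maximum within 10 voxels of a stronger one,
# so nearby peaks no longer spawn separate objects (10 is illustrative;
# distance and binary are the arrays from the sketches above).
coords = peak_local_max(distance, labels=binary, min_distance=10)
```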


#10

For the record, we wish that IdentifyPrimaryObjects worked in 3D, but that module is pretty beastly, which is why we offer bits and pieces of its functionality as separate modules for the time being, until someone has the enthusiasm to upgrade the module's full functionality.