How is the NumberOfNeighbors measurement calculated

Hi All

I wonder if someone could tell me a little about how the NumberOfNeighbors measurement is arrived at?
If I run MeasureObjectNeighbors and select "Expand until adjacent" as the means of determining neighbors, what distance is used to calculate the number of neighbors? Is it a function of the nearest neighbor, or something more complex than that?

Cheers in advance,

Hi Paul,

Starting with the Help:
“Expand until adjacent: The objects are expanded until all pixels on the object boundaries are touching another. Two objects are neighbors if any of their boundary pixels are adjacent after expansion.”

This means that each object is subjected to a morphological dilation (or expansion), i.e. each object’s boundary is grown out one pixel at a time; in the case of Expand Until Adjacent, this continues until there are no empty pixels left in the image.

You should end up with a figure like the one attached. In a simple hexagonal packing, most objects would thus have 6 neighbors.

I think many users assume we use center-to-center distances in MeasureObjectNeighbors, but objects have different shapes. To take this into account, we use morphological expansion, which effectively measures (more or less) the nearest-edge-to-nearest-edge distance.
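The expand-until-adjacent idea can be sketched outside CellProfiler in a few lines of Python (an illustrative approximation using scipy; the function name is mine, and this is not the module's actual code):

```python
import numpy as np
from scipy import ndimage

def neighbors_after_expansion(labels):
    """Expand labeled objects until the image is tiled, then count, for each
    object, how many other objects its expanded region touches.
    Illustrative sketch of 'Expand until adjacent', not CellProfiler's code."""
    # Assign every background pixel the label of its nearest object pixel,
    # which is equivalent to dilating all objects until no empty pixels remain.
    distances, (ri, ci) = ndimage.distance_transform_edt(
        labels == 0, return_indices=True)
    expanded = labels[ri, ci]
    counts = {}
    for lbl in np.unique(labels):
        if lbl == 0:
            continue
        mask = expanded == lbl
        # Grow the region by one pixel and see which other labels it touches.
        ring = ndimage.binary_dilation(mask) & ~mask
        touching = np.unique(expanded[ring])
        counts[int(lbl)] = int(len(touching[touching > 0]))
    return counts
```

With two seed points the expanded regions tile the whole image and each object ends up with exactly one neighbor; with three collinear points the middle one gets two.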

Hope that helps!

Hi David

Thanks for the reply - didn’t notice it until now (not that it wasn’t incredibly urgent :wink: )

I was mistakenly under the impression that the Neighbors measurements were taken from the object centroid. I suppose that it’s also less computationally expensive to expand?

In my case, I’m looking at the relative distribution of nuclei across a sample, but I don’t want to simply chop the sample up into little squares - I’d like to know the number of nearby cells so that I can make some nicer plots using real images…

For example, I have the following image

in which I can detect the nuclei and measure their neighbours.

What I would like to do is to colour each nucleus based on the number of cells within, e.g., a 200 px radius, rather than the current colouring, which is based on expand until adjacent. As the nuclei are very simple shapes, it would be fine to use the centroid of each object.

So far, I’ve tried reducing the objects to a point, then finding neighbours within 200 px. CellProfiler freezes every time I do this, and I assume that’s because there are so many objects in an image? Could you point me towards a better way of doing this if you have any ideas?

Also, as I have an array of images captured across a sample, the scale of each image varies depending on the distribution within it. Would it be possible to use a fixed scale, e.g. 0–20, rather than fitting the scale to each image? I ultimately want to tile my images back together, so the scale must be uniform. (I could rescale them in MATLAB afterwards, which I think might be the quickest way to go.)
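For what it's worth, applying a fixed 0–20 scale is straightforward as a post-processing step outside CellProfiler (a hypothetical numpy helper, not a CellProfiler setting; `vmin`/`vmax` are assumed bounds):

```python
import numpy as np

def to_fixed_scale(counts, vmin=0.0, vmax=20.0):
    """Rescale neighbor counts onto a fixed [0, 1] range, clipping values
    above vmax, so every image in a montage shares one colour scale.
    Hypothetical post-processing helper, not a CellProfiler option."""
    counts = np.asarray(counts, dtype=float)
    return np.clip((counts - vmin) / (vmax - vmin), 0.0, 1.0)
```

Any colormap applied to the normalized values will then be uniform across all images in the tiled well.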


P.S. Just switched to the trunk build - hats off to all of you as the improvements are really fantastic, and it’s running faster than ever with very few crashes
Locations.cppipe (13.8 KB)

Hi Paul,
Glad that you like the new version!

As for the neighbor algorithm you desire, it sounds like an instance of fixed-radius near neighbors. As discussed, this is not how MeasureObjectNeighbors is designed, largely because of speed/computation considerations, but also because we don’t have a data model for overlapping objects. (We have thought about this, but it is in the long-term future :smile: ) Morphological dilation can be done much faster.
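For comparison, here is what fixed-radius near-neighbor counting looks like on centroids using a KD-tree (a standalone scipy sketch, not a CellProfiler module; the function name is my own):

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbors_within_radius(centroids, radius=200.0):
    """Count, for each centroid, how many other centroids lie within
    `radius` pixels. Sketch of fixed-radius near-neighbor counting."""
    tree = cKDTree(centroids)
    # query_ball_point returns, for each query point, the indices of all
    # points within `radius` (including the point itself), so subtract 1.
    return np.array([len(idx) - 1
                     for idx in tree.query_ball_point(centroids, r=radius)])
```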

However, you might try this pipeline hack, that I just thought of (project attached, and note that I used the most recent 2.1 trunk build). It works like this:
(1) Shrinks nuclei to a point (as you had done)
(2) Convert these points to a binary image
(3) “Smooth” these binary points using “Circular Average Filter” with a radius of the size that you want to include as neighbors. Basically you are converting each point to a circle with a total intensity of 1, spread over the whole area of the circle. Overlapping areas, i.e. neighbors, will sum their intensities.
NOTE: A radius here greater than ~50 pixels will likely crash, running out of memory, so that is a restriction to this method.
(4) Measure the intensities at each centroid with MeasureObjectIntensity.

You will have to convert your intensities to the number of neighbors: each neighbor contributes an intensity of 1/(pi·(Diameter/2)^2), so multiply the measured intensity by pi·(Diameter/2)^2 to recover the count. You could do this with a CalculateMath module, but I didn’t include that.
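Steps (1)–(4) plus the intensity-to-count conversion can be sketched end-to-end in Python (an illustrative reimplementation that stands in a normalized disk convolution for the Circular Average Filter; all names are assumptions, not the pipeline's actual modules):

```python
import numpy as np
from scipy import ndimage

def neighbor_counts_via_smoothing(points_mask, radius):
    """Spread each point over a disk of unit total intensity, then read the
    summed intensity back at each point and convert it to a neighbor count.
    Sketch of the pipeline hack, assuming a binary point image as input."""
    # Build a disk kernel; before normalization its sum is the disk area.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2 <= radius**2).astype(float)
    area = disk.sum()
    # Convolve so each point contributes intensity 1/area per covered pixel.
    density = ndimage.convolve(points_mask.astype(float), disk / area,
                               mode='constant')
    # Intensity at a point = (number of disks covering it) / area, so
    # multiply back by the area; subtract 1 to exclude the point itself.
    return np.rint(density[points_mask.astype(bool)] * area).astype(int) - 1
```

Two points 3 px apart with radius 5 each see one neighbor; an isolated point sees zero.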

Let us know if that works for you!
project_near_neighbors2.cpproj (265 KB)

Hi David

I like your way of expanding the objects and summing the pixel intensity values. If I could expand the objects and colour them based on their single-pixel intensity, that would be ideal, but I don’t think the colormap options in ConvertObjectsToImage can do this for me - colourmaps are generally randomly assigned, or based on object number…
A faster way to do this might be to
i) detect nuclei and shrink to a point
ii) expand these points to the desired measurement radius
iii) measure the number of tertiary objects within the measurement radius for each expanded object (using the original points as the tertiary objects)

However, this would also make it tricky to create the heatmap-style images I’m so keen on, though it would allow measurement of nuclei within a given radius.

I actually managed to do the measurements with the pipeline I uploaded before; however, it took ~20 hours to process 96 images, shrinking the nuclei and then detecting cells within a radius of 50 px.
Here is the original image montage;

And here is my ‘heatmap’ which I’m very happy with for a starter

There are still issues to be ironed out, such as the fact that the colour scale in each image is different, so in the montage of the whole well the scales do not match. I could of course rescale my images in MATLAB later, so that’s perhaps not too much of a problem.
Objects at the edge of images also only register half of the local cells, obviously, so I get an obvious artefact from processing the images as batches (although there’s certainly no way I could process a huge image of the full well). Taking overlapping images then combining cropped versions may be my best course of action in ironing these out.

As you can see, I had to just expand my centroid objects into circles, but that’s not a huge problem as showing the actual nucleus shape isn’t necessary at this resolution. I really like how it’s looking, but the processing time is too excessive using this method (especially with such high resolution source images) since I’ll ultimately process over 1,000 images and would like to provide the pipeline to a masters student to work with on their own.

I thought that scaling the image down to a more workable resolution would be faster, but I’m struggling to carry my previously detected objects through to the new image. It seems like I’ll lose the least data if I detect the objects at high resolution, then do the detection of fixed-radius nearest neighbours on a lower-resolution image. If, for example, I do the following (pipeline attached):
i) Shrink my detected nuclei to a point
ii) Expand these points to circles with radius 4
iii) Convert these new objects to images
iv) Resize the image to reduce resolution, which should basically create new points, as long as no objects overlap

I get the following error when I try to detect these new smaller objects;

Traceback (most recent call last):
  File "cellprofiler\gui\pipelinecontroller.pyc", line 2482, in do_step
  File "cellprofiler\modules\identifyprimaryobjects.pyc", line 874, in run
  File "cellprofiler\modules\identify.pyc", line 737, in threshold_image
  File "cellprofiler\cpimage.pyc", line 401, in __getattr__
  File "cellprofiler\cpimage.pyc", line 214, in get_mask
  File "cellprofiler\cpimage.pyc", line 288, in crop_image_similarly
RuntimeError: Images are of different size and no crop mask available.
Use the Crop and Align modules to match images of different sizes.

Do you know if this is a bug? I suppose this is a slightly cumbersome way of doing this, as I could just rescale the x and y coordinates of my nuclei using the math module, then expand them on a smaller image?
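That coordinate-rescaling idea could look like this outside CellProfiler (a hypothetical numpy helper; the function name, `shape`, and `factor` are assumptions for illustration):

```python
import numpy as np

def points_image_at_scale(centroids, shape, factor):
    """Build a downscaled binary point image directly from full-resolution
    (row, col) centroids, sidestepping the Resize module entirely.
    Note: centroids that round to the same downscaled pixel will merge."""
    small_shape = (int(round(shape[0] * factor)),
                   int(round(shape[1] * factor)))
    img = np.zeros(small_shape, dtype=bool)
    pts = np.rint(np.asarray(centroids, dtype=float) * factor).astype(int)
    pts = np.clip(pts, 0, np.array(small_shape) - 1)  # keep points in bounds
    img[pts[:, 0], pts[:, 1]] = True
    return img
```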

Thanks for the help,

Seed.002.cpproj (282 KB)

[quote]I get the following error when I try to detect these new smaller objects;
Do you know if this is a bug?[/quote]

It does appear to be a bug. I’ve filed a bug report on GitHub, with a slightly different error (but same root cause I think). I can post when it’s fixed.

Thanks for the bug report. The problem is a bug in our Resize module. You’re cropping the image and then resizing it, but we don’t readjust some other things that go along with it (the mask and the cropping) when you resize. Unfortunately, cropping and then resizing seems to prevent you from reusing the image further down the pipeline in just about any module; I’m afraid the only work-around might be to run one pipeline to save the resized and cropped images and then use a second one to fix them. I’ll probably get a fix into CellProfiler for this today; if you want to track progress, the issue is filed on GitHub. The fix will be included in our upcoming release.

(PS: fix is complete and will be available in the next trunk build and release)

Just a note to say that I can now do exactly what I want (for now!) thanks to the bug fix. I have a pretty fast pipeline up and running to measure cells within a given radius. Thanks for the help as always, guys!

Great, glad to hear things are working out!

This issue has been resolved in CellProfiler 2.1 and later, new releases of which can now be downloaded from the CellProfiler website.