Thanks for sending those. Looking at the pipeline, it's fairly clear that it's running out of memory in a way we'd expect. For the first image set you have two sets of objects, containing 64000 and 46000 objects. In MeasureObjectNeighbors you then ask it to measure the distance between the two sets. To do that, the program wants to generate a 64000x46000 distance matrix, which at 8 bytes per entry comes to roughly 24 GB and instantly maxes out your system memory. On Windows (and, I expect, older OS X versions) Python would then display a warning about being unable to allocate an array of that size; I'm not sure why that isn't appearing here.
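To make the scale concrete, here's a back-of-the-envelope estimate of that matrix's footprint. The object counts come from your pipeline; the 8 bytes per entry assumes a float64 distance matrix, which is the usual NumPy default.

```python
# Estimate the memory needed for the full pairwise distance matrix.
# Object counts are from the pipeline; float64 (8 bytes/entry) is assumed.
n_a, n_b = 64_000, 46_000
bytes_needed = n_a * n_b * 8
print(f"{bytes_needed / 1e9:.1f} GB")  # roughly 23.6 GB for a single matrix
```

That's before counting the image data and any other measurements held in memory at the same time.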
It's possible that with some development we could come up with a more memory-efficient way to calculate neighbours, but for the time being this is somewhat expected: these object sets are extremely large, and I expect the processing time in your other modules reflects that too.
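For what it's worth, one memory-efficient approach would be a spatial index rather than the full matrix. This is just a sketch of the idea, not how MeasureObjectNeighbors currently works; the random centroid arrays stand in for the real object locations.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: build a KD-tree on one object set's centroids and query
# nearest neighbours from the other. This needs O(n) memory for the
# results instead of an n*m distance matrix.
rng = np.random.default_rng(0)
set_a = rng.uniform(0, 10_000, size=(64_000, 2))  # stand-in (y, x) centroids
set_b = rng.uniform(0, 10_000, size=(46_000, 2))

tree = cKDTree(set_b)
# Nearest distance from every object in set A to any object in set B.
distances, indices = tree.query(set_a, k=1)
print(distances.shape)  # (64000,)
```

The trade-off is that you only get nearest-neighbour (or k-nearest) distances rather than every pairwise distance, but for neighbour measurements that's usually what's wanted.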
I think the best approach here might be to break the images down into tiles before processing.
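A minimal sketch of what tiling looks like, if it helps; the 2048-pixel tile size is an arbitrary choice, and a real tiling scheme would also want some overlap so objects aren't cut at tile borders.

```python
import numpy as np

def iter_tiles(image, tile=2048):
    # Yield non-overlapping tiles in row-major order; edge tiles
    # may be smaller than `tile` x `tile`.
    h, w = image.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield image[y:y + tile, x:x + tile]

image = np.zeros((5000, 6000), dtype=np.uint16)  # stand-in image
tiles = list(iter_tiles(image))
print(len(tiles))  # 3 rows x 3 cols = 9 tiles
```

Each tile can then be run through the pipeline independently, which bounds the object counts (and so the distance-matrix size) per run.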
One other bit of advice: it looks like you use an initial IdentifyPrimaryObjects module just to find the whole area of tissue. That's an extremely intensive module to use for this purpose, so you may be better served by a Threshold module followed by RemoveHoles, then perhaps ConvertImageToObjects to generate your 'tissue' objects. Alternatively, you could enable IdentifyPrimaryObjects' advanced settings and disable the 'separate touching objects' options. Either should save a considerable amount of time.
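To illustrate why that chain is so much cheaper, here's a rough scikit-image analogue of Threshold -> RemoveHoles -> ConvertImageToObjects. The real modules have more options; the synthetic image just stands in for your tissue scan.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_holes, remove_small_objects

# Synthetic image: dim background with one bright "tissue" region.
rng = np.random.default_rng(0)
image = rng.uniform(size=(512, 512))
image[100:400, 100:400] += 1.0

# Threshold (global Otsu), fill small holes, drop speckle,
# then label connected regions as objects.
mask = image > threshold_otsu(image)
mask = remove_small_holes(mask, area_threshold=500)
mask = remove_small_objects(mask, min_size=64)
tissue_objects = label(mask)
print(tissue_objects.max())
```

None of these steps tries to separate touching objects, which is where IdentifyPrimaryObjects spends most of its effort, and which you don't need when the goal is just the overall tissue area.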