Merging objects

Hi guys, could you help me add a rule to my pipeline to merge two objects in close proximity into one? I gather this should be possible in IdentifyPrimaryObjects, but whatever I try I still end up with two distinct objects (see objects 11-12 and 17-19 in the second attached image). I have tried several things: I set the ‘method to distinguish clumped objects’ and ‘method to draw dividing lines’ settings to ‘Intensity’, and then varied both the ‘size of smoothing filter’ and the ‘suppress local maxima that are closer than this minimum allowed distance’ values between 1 and 2000 (since I read somewhere that these affect whether or not objects are merged), but to no avail. Am I doing something wrong here?
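To show what I think those two settings do, here is my rough mental model, sketched with scikit-image/scipy (this is just my guess, not CellProfiler’s actual code). My suspicion is that they only control how a single thresholded region gets split, which might be why varying them never merges anything:

```python
# My rough mental model of the intensity-based declumping settings
# (scikit-image/scipy sketch, NOT CellProfiler's own code): smooth the image,
# find local maxima at least `min_distance` apart, and use them as watershed
# seeds *within* each thresholded region.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def declump(intensity, foreground_mask, smoothing_sigma=2, min_distance=10):
    smoothed = ndi.gaussian_filter(intensity.astype(float), smoothing_sigma)
    # Seeds: local intensity maxima, suppressed if closer than min_distance.
    peaks = peak_local_max(smoothed, min_distance=min_distance,
                           labels=foreground_mask.astype(int))
    seeds = np.zeros_like(smoothed, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # The watershed only runs inside the mask, so (if I understand correctly)
    # objects that thresholding already separated stay separate no matter how
    # these two parameters are set.
    return watershed(-smoothed, seeds, mask=foreground_mask)
```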

Some more info on my pipeline: I select objects, then shrink them to a dot, then expand them to the size of the outlines in the attached image (to later measure in a different channel).
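In scikit-image terms, the shrink-then-expand part is roughly something like this (just to illustrate what I mean, not my actual pipeline modules; the distance value is arbitrary here):

```python
# Illustration only (scikit-image, not my actual CellProfiler modules):
# shrink each object to its centroid pixel, then grow it back out by a fixed distance.
import numpy as np
from skimage.measure import regionprops
from skimage.segmentation import expand_labels

def shrink_then_expand(labels, distance=10):
    dots = np.zeros_like(labels)
    for region in regionprops(labels):
        r, c = np.round(region.centroid).astype(int)
        dots[r, c] = region.label               # shrink the object to a single dot
    return expand_labels(dots, distance=distance)  # expand the dots back out
```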

Something else I tried is to unify objects based on proximity, but then even though they become one object, the outline follows both their centers as you can see in the first attached image.

Great program btw!

P.S. Is there an option to change from ‘dot’ to ‘comma’ as the decimal separator when exporting to .csv? I have to replace them manually now to avoid Excel interpreting the values incorrectly.
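Something like the following would save me the manual find-and-replace, but it is an extra step outside CellProfiler (pandas here is just my own assumption, not a CellProfiler feature, and the file names are made up):

```python
# Hypothetical post-processing step (pandas assumed): re-save a CellProfiler CSV
# with a comma as the decimal separator and a semicolon as the field separator,
# so Excel with European locale settings reads the numbers correctly.
import pandas as pd

df = pd.read_csv("MyExpt_Image.csv")                  # CellProfiler export (dot decimals)
df.to_csv("MyExpt_Image_eu.csv", sep=";", decimal=",", index=False)
```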




Would you mind posting your pipeline/project file plus the original input images that are giving you this result?
-Mark

Sure, is a 7zip file OK? If so, see attached:
CPBas.7z (2.26 MB)

It seems to me that you should change your thresholding method to something more lenient and/or decrease the threshold correction factor accordingly, so that these adjacent objects get detected as one. For example, changing to Otsu, 3-class with the middle class set to Background and a threshold correction factor of 1 seems to do the trick, once you increase the allowed size limits accordingly (for example, from 15 to 100).
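To illustrate roughly what the 3-class idea amounts to, here is a scikit-image sketch (an illustration only, not what CellProfiler runs internally):

```python
# Sketch of 3-class Otsu with the middle class treated as background
# (scikit-image illustration, not CellProfiler's implementation).
import numpy as np
from skimage.filters import threshold_multiotsu

def three_class_foreground(image, correction_factor=1.0):
    lower, upper = threshold_multiotsu(image, classes=3)  # two thresholds, three classes
    # "Middle class -> background" means only pixels above the *upper* threshold
    # count as foreground; the correction factor scales that threshold.
    return image > upper * correction_factor
```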
-Mark

Thanks Mark, I’m away for a few days, but I’ll give it a try when I get back.

Hi Mark, that does seem to work, thanks! I’m sorry to bother you again, but I repeated my experiment with a different setting on the microscope: a z-stack projection of ‘sum’ instead of ‘max intensity’ (I believe this is the only difference), and now I am suddenly unable to identify any objects at all, both with these new settings and with automatic thresholding. I tried playing with the diameter size and with some values for the manual threshold (although I’m not really sure how that works, so I might not have tried the right values), among other things, but I either get one big object or none at all… I attached an image; it would be great if you could take a look at it.
STARV_20_LUC_02_01_R3D_D3D_PRJ.7z (3.3 MB)

OK, I can see what you mean. It appears that CP is not doing the usual scaling from 0 to 1, for some reason. We’ll have to see if it’s a bug.

In the meantime, the workaround is to add a RescaleIntensity module before any Identify modules and rescale the images back to 0 to 1, with the following settings:

  • Select the input image
  • Select “Stretch each image to use the full intensity range” for rescaling method

Keep in mind that the min and max may differ from image to image, depending on your assay/acquisition parameters. So if you use a manual threshold value in the Identify module, it may not work consistently from image to image.
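In numpy terms, the workaround amounts to something like the following per-image stretch, which is also why a fixed manual threshold may behave differently per image (illustration only, not RescaleIntensity’s actual code):

```python
# Per-image min/max stretch, roughly what "Stretch each image to use the full
# intensity range" does (illustration only, not RescaleIntensity's actual code).
import numpy as np

def stretch_to_unit_range(image):
    image = image.astype(float)
    lo, hi = image.min(), image.max()
    if hi == lo:
        return np.zeros_like(image)       # flat image: nothing to stretch
    return (image - lo) / (hi - lo)       # every image ends up spanning 0..1

# Because lo and hi differ from image to image, a fixed manual threshold applied
# after this stretch can mean something different for each image.
```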

Regards,
-Mark

Thanks Mark, that does indeed seem to work.

I don’t know if this helps or if it’s something you would expect, but I only need the RescaleIntensity step for identifying these objects; measuring intensities further on in my pipeline works fine without it.

If I can help in any way with tracking down a possible bug, just let me know.

From our software engineer:

LeeKamentsky wrote: “The .dv data is being read as floating-point values by Bio-Formats and, apparently, the scale is 0 to 65535, but there is no metadata to indicate that. There is some metadata that gives the max value per channel, but that’s just the maximum seen in the image, not the scaling value.

I can’t think of any generic way to determine the scale - I’d suggest using ImageMath to scale each channel (possibly after using ColorToGray to separate them) by multiplying by 0.0000152590 (i.e., dividing by 65535).”
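Outside of CellProfiler, Lee’s suggestion amounts to something like this (a numpy sketch; inside the pipeline you would use ImageMath as he describes):

```python
# Numpy sketch of the fixed rescale Lee describes (ImageMath would do the
# equivalent inside the pipeline): divide every image by the same constant.
import numpy as np

SCALE = 1.0 / 65535.0   # = 0.0000152590..., the 16-bit full-scale value

def rescale_dv(image):
    # Same constant for every image, so the scaling never varies between images.
    return image.astype(float) * SCALE
```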

Will the scale be the same for every image after I do this, or is there a possibility that CP assigns different scales to different images when opening them?

If you take LeeKamentsky’s approach, then since you are dividing by a constant, the scaling will be the same in every case.
-Mark