Issues with IdentifyObjectsManually Module

Hi folks,
When using the IdentifyObjectsManually module I get the following error (screenshot below):
“Error while processing IdentifyObjectsManually:
numpy boolean subtract, the ‘-’ operator, is not supported, use the bitwise_xor, the ‘^’ operator, or the logical_xor function instead”

Does anyone here know what could be happening and how to fix it?

Could you post the pipeline and a sample image? I suspect it’s to do with the TYPE of image you’re trying to feed into IdentifyObjectsManually (does it work if you set it to just, say, an input image?), but the pipeline would help confirm that. Thanks!

Thanks Beth for your reply. Here is the cpproj file with a couple of images. If you step through it with these two images loaded you should encounter the same error at the manual step.
Ratio_Image_Proc_v2.cpproj (547.4 KB) 4321-488-4.tif (2.8 MB) mab22-568-4.tif (2.8 MB)

Hi Ashok,

Thanks for uploading that! As I suspected from the error message, IdentifyObjectsManually is not behaving nicely on binary images specifically; that’s a bug and I’ll file it for fixing.
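For anyone curious, the message itself comes straight from NumPy: recent versions no longer allow ‘-’ between boolean arrays, so any code that subtracts one binary mask from another (for example, to draw outlines) fails with exactly this error. Here’s a minimal, standalone reproduction; this is just an illustration, not CellProfiler’s actual code:

```python
import numpy as np

# Two boolean masks, e.g. a filled object mask and an eroded copy of it
mask_a = np.array([[True, True, False],
                   [True, False, False]])
mask_b = np.array([[True, False, False],
                   [False, False, False]])

# mask_a - mask_b  # TypeError: numpy boolean subtract, the `-` operator, is not supported...
outline = mask_a ^ mask_b  # bitwise XOR works on boolean arrays
print(outline)
print(np.array_equal(outline, np.logical_xor(mask_a, mask_b)))  # True: logical_xor is equivalent
```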

In the meantime, you can use the “Color” mode in ConvertObjectsToImage to create the image to be fed to IDManually.
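Roughly speaking, “Color” mode hands IDManually a color rendering of the labels rather than a plain binary mask, which sidesteps the boolean arithmetic above. If it helps to picture what that produces, here’s a sketch of an analogous operation in scikit-image (label2rgb is only a stand-in for what the module does, not its implementation):

```python
import numpy as np
from skimage.color import label2rgb

# Hypothetical label matrix standing in for a set of CellProfiler objects
labels = np.zeros((64, 64), dtype=int)
labels[10:30, 10:30] = 1
labels[35:55, 35:55] = 2

# Render each label in its own color (loosely what "Color" mode produces),
# so the downstream module receives an RGB image rather than a binary mask
color_image = label2rgb(labels, bg_label=0)
print(color_image.shape)  # (64, 64, 3)
```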

You could also possibly simplify your workflow somewhat. It seems like you have about half a dozen modules, including the IDManually module, whose only purpose is to throw out objects at the edges and revise outlines slightly (I can tell that from your module notes; great work on those!); you could do both in fewer steps if you fed your initial segmentation into EditObjectsManually instead. It’s possible I’m not grasping something though, in which case ignore that advice. :wink:

Thanks Beth.
I do agree I could simplify my workflow. Any suggestions are welcome!
Here is what I am trying to do. I have two labels on the same cells (hence two greyscale images), and I want the relative fluorescence of each label in the different regions of the cell. However, to segment the cell I want to use only one of the two images, the one that is better suited for segmentation, and then use that segmentation to demarcate the cell object in both images. Furthermore, the images often contain portions of other cells that I don’t want to include; the manual step is just to get rid of those. Let me know if you can suggest any simplifications. Thanks.

Hi @AshokPrasad,

Here are a few ideas for your pipeline.

CellProfiler can definitely make these types of measurements.

I often take this approach when working with image sets. I’ll use whatever channels are optimal to create “cell” objects but then I’ll measure multiple channels within those cells. For example, I might use the nuclear stain and a cytoplasmic stain to create an object that demarcates where the cells are, but then I’ll make measurements for other channels within those cells.

In your pipeline, for example, if the MAB22FinalCell object accurately captures the cell objects that you’re interested in, you don’t need to create a FOUR321FinalCell object in order to measure intensity for the Four321 channel within the cells. You can use a single MeasureObjectIntensity module to measure both channels within the MAB22FinalCell objects.
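If it’s useful to see the idea outside of CellProfiler, here’s a small scikit-image sketch of “segment once, measure every channel in the same objects”. The arrays are synthetic stand-ins for your MAB22 and Four321 channels, not real data:

```python
import numpy as np
from skimage.measure import regionprops_table

# Hypothetical stand-ins: a label image (e.g. the MAB22FinalCell objects,
# segmented from the MAB22 channel only) plus both raw intensity channels
labels = np.zeros((64, 64), dtype=int)
labels[5:25, 5:25] = 1
labels[30:60, 30:60] = 2
rng = np.random.default_rng(0)
mab22 = rng.random((64, 64))
four321 = rng.random((64, 64))

# One set of objects, intensity measured in BOTH channels
for name, channel in [("MAB22", mab22), ("Four321", four321)]:
    stats = regionprops_table(labels, intensity_image=channel,
                              properties=("label", "mean_intensity"))
    print(name, stats["mean_intensity"])
```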

It’s possible that you could create a workflow to detect these regions automatically, if there is a way to distinguish the unwanted regions from cell objects (such as by size, intensity, etc.).
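In CellProfiler that kind of cleanup is usually handled with something like a FilterObjects module (filtering by area or another measurement). As a rough illustration of the same idea, here’s a scikit-image sketch that drops small fragments and edge-touching objects from a hypothetical binary segmentation:

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import remove_small_objects
from skimage.segmentation import clear_border

# Hypothetical binary segmentation: one good cell, some small debris,
# and a partial cell touching the image border
mask = np.zeros((64, 64), dtype=bool)
mask[20:45, 20:45] = True   # the cell we want to keep
mask[2:6, 2:6] = True       # small debris
mask[0:15, 50:64] = True    # partial cell on the border

cleaned = remove_small_objects(mask, min_size=50)  # drop tiny fragments
cleaned = clear_border(cleaned)                    # drop edge-touching objects
print(label(cleaned).max())  # 1 -> only the full cell remains
```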

In general, this tutorial may be useful for you: I2K 2020 tutorial: Introduction to Image Analysis and Machine Learning With CellProfiler and Cell... - YouTube. The biological problem is different, but the principles of how to build a pipeline and create and measure objects are the same.

Good luck!
Pearl