Splitting a large image in a 'smart' way?

Hi all,

Excited to take 3.0 for a spin. I’ve been snapping some large images on our confocal across large areas. Here’s an example.

I don’t want to capture individual images with cells cut off at the edge, so I’m looking for a ‘smart’ way to chop this large, nicely stitched image up and process it in CP. Is there anything out there that can do a simple detection of cells and snip out an ROI around each one? I don’t want to crop to an arbitrary size, e.g. 2000x2000, and miss out on data from cells touching the edge.

Luckily most of my work is at this density - an automated analysis dream!


Can I ask why you want to crop it into smaller pieces? If CP can load the image in order to split it into smaller chunks, it should be able to process the whole thing just fine, in my experience. I’d just try that first.

But sure, you could definitely create ROIs based on cells; you could build a CellProfiler 3.0 pipeline that looks something like Resize (downsample for the sake of speed / finding big chunks) -> Smooth -> IdentifyPrimaryObjects (find your big chunks) -> ResizeObjects (scale them back up to full size) -> SaveCroppedObjects. You’d then have a bunch of exported masks, each one corresponding to a chunk.

You could then run a separate CP pipeline that loads each mask together with its corresponding raw image (you’d want to do metadata-based matching in NamesAndTypes to make that work), uses Crop to crop the image down to just that area, and then does whatever analyses you care about.
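If it helps to see the idea outside of CP, here’s a rough Python sketch of the same downsample -> smooth -> find chunks -> scale back -> crop logic using numpy/scipy. The downscale factor, smoothing sigma, mean-based threshold, and minimum chunk size are all assumptions you’d tune for your own images, and the thresholding here is a stand-in for whatever IdentifyPrimaryObjects would actually do:

```python
import numpy as np
from scipy import ndimage

def crop_cell_chunks(image, downscale=4, sigma=2, min_size=10):
    """Sketch of the pipeline: downsample, smooth, label big chunks,
    map the bounding boxes back to full resolution, and crop."""
    # Resize: downsample for speed (simple stride-based decimation)
    small = image[::downscale, ::downscale].astype(float)
    # Smooth: blur so nearby cells merge into one big chunk
    smoothed = ndimage.gaussian_filter(small, sigma=sigma)
    # IdentifyPrimaryObjects (crudely): threshold and label components
    mask = smoothed > smoothed.mean()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    crops = []
    for i, sl in enumerate(ndimage.find_objects(labels)):
        if sizes[i] < min_size:  # skip tiny specks
            continue
        # ResizeObjects: scale the bounding box back to full resolution
        full = tuple(slice(s.start * downscale, s.stop * downscale)
                     for s in sl)
        # SaveCroppedObjects: cut the chunk out of the original image
        crops.append(image[full])
    return crops
```

Each returned crop is a full-resolution sub-image around one chunk, sized to fit the chunk rather than an arbitrary 2000x2000 box, so cells at the chunk edges stay intact.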