I like your way of expanding the objects and adding the pixel intensity values. If I could expand the objects and colour them based on their single-pixel intensity that would be ideal, but I don't think the colormap options in ConvertObjectsToImage can do this for me - colourmaps are generally randomly assigned, or based on object number…
A faster way to do this might be to
i) detect nuclei and shrink to a point
ii) expand these points to the desired measurement radius
iii) count the tertiary objects within the measurement radius for each expanded object (using the original points as the tertiary objects)
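For reference, outside CellProfiler the same fixed-radius neighbour count can be sketched in a few lines of Python with scipy's cKDTree - a sketch, assuming you can export the nuclei centroids (e.g. via ExportToSpreadsheet); the function name here is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbours_within_radius(centroids, radius):
    """For each nucleus centroid, count how many other nuclei lie
    within `radius` pixels (the point itself is excluded)."""
    tree = cKDTree(centroids)
    # query_ball_point returns each point's neighbours, including itself
    return np.array([len(idx) - 1
                     for idx in tree.query_ball_point(centroids, r=radius)])

# toy example: three points on a line, measurement radius 50 px;
# the first two see each other, the third is isolated
pts = np.array([[0.0, 0.0], [30.0, 0.0], [200.0, 0.0]])
counts = neighbours_within_radius(pts, 50)
```

A KD-tree makes this roughly O(n log n), which is why it should be dramatically faster than 20 hours per plate for the same measurement.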
However, this would also make it tricky to create the heatmap-style images I'm so keen on, though it would allow measurement of nuclei within a given radius.
I actually managed to do the measurements with the pipeline I uploaded before; however, it took ~20 hours to process 96 images, shrinking the nuclei and then detecting cells within a radius of 50 px.
Here is the original image montage:
And here is my 'heatmap', which I'm very happy with as a starting point:
There are still issues to be ironed out, such as the fact that the colour scale in each image is different, so in the montage of the whole well the scales don't match. I could of course rescale my images in MATLAB later, so that's perhaps not too much of a problem.
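On the mismatched colour scales: one post-hoc fix is to normalise every heatmap to a single global min/max before applying the colormap, so one scale is comparable across the whole montage. A minimal numpy sketch, assuming the heatmaps are loaded as float arrays (the function name is mine):

```python
import numpy as np

def rescale_to_global(images):
    """Map every image onto a shared [0, 1] range so a single
    colormap reads consistently across all montage tiles."""
    lo = min(im.min() for im in images)
    hi = max(im.max() for im in images)
    span = hi - lo if hi > lo else 1.0  # guard against a flat image set
    return [(im - lo) / span for im in images]

# two tiles with different local ranges end up on one common scale
tiles = [np.array([[0.0, 2.0]]), np.array([[4.0, 8.0]])]
scaled = rescale_to_global(tiles)
```

This requires a second pass over the saved images (the global extremes aren't known until every tile has been processed), which is why doing it after CellProfiler, as you suggest, is the natural place for it.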
Objects at the edge of each image also only register about half of their local cells, obviously, so I get a clear artefact from processing the images as batches (although there's certainly no way I could process one huge image of the full well). Taking overlapping images and then combining cropped versions may be my best course of action for ironing this out.
As you can see, I had to expand my centroid objects into circles, but that's not a huge problem, as showing the actual nucleus shape isn't necessary at this resolution. I really like how it's looking, but the processing time with this method is excessive (especially with such high-resolution source images), since I'll ultimately process over 1,000 images and would like to hand the pipeline to a master's student to work with on their own.
I thought that scaling the image down to a more workable resolution would be faster, but I'm struggling to carry my previously detected objects over to the new image. It seems I'll lose the least data if I detect the objects at high resolution, then do the fixed-radius nearest-neighbour detection on a lower-resolution image. If, for example, I do the following (pipeline attached):
i) Shrink my detected nuclei to a point
ii) Expand these points to circles with radius 4
iii) Convert these new objects to an image
iv) Resize the image to reduce the resolution, which should essentially create new points, as long as no objects overlap
I get the following error when I try to detect these new, smaller objects:
Traceback (most recent call last):
  File "cellprofiler\gui\pipelinecontroller.pyc", line 2482, in do_step
  File "cellprofiler\modules\identifyprimaryobjects.pyc", line 874, in run
  File "cellprofiler\modules\identify.pyc", line 737, in threshold_image
  File "cellprofiler\cpimage.pyc", line 401, in __getattr__
  File "cellprofiler\cpimage.pyc", line 214, in get_mask
  File "cellprofiler\cpimage.pyc", line 288, in crop_image_similarly
RuntimeError: Images are of different size and no crop mask available.
Use the Crop and Align modules to match images of different sizes.
Do you know if this is a bug? I suppose this is a slightly cumbersome way of doing it anyway; I could instead just rescale the x and y coordinates of my nuclei using the Math module, then expand them on a smaller image?
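For what it's worth, that coordinate-rescaling route can be sketched directly: multiply each centroid by the resize factor and stamp it as a single labelled pixel into a blank low-resolution image. A sketch only, with an assumed 0.25x scale and illustrative names; note that two nearby nuclei can collide into one pixel after downscaling, which is the same overlap caveat as the image-resize route:

```python
import numpy as np

def centroids_to_small_image(centroids, full_shape, scale):
    """Scale (y, x) centroid coordinates by `scale` and mark each one
    as a single labelled pixel in a correspondingly smaller image."""
    small_shape = (int(round(full_shape[0] * scale)),
                   int(round(full_shape[1] * scale)))
    small = np.zeros(small_shape, dtype=np.int32)
    for label, (y, x) in enumerate(centroids, start=1):
        # clamp to the image bounds so border centroids stay inside
        r = min(int(round(y * scale)), small_shape[0] - 1)
        c = min(int(round(x * scale)), small_shape[1] - 1)
        small[r, c] = label  # later centroids overwrite colliding ones
    return small

# two nuclei from a 1000x1000 image mapped onto a 250x250 image
small = centroids_to_small_image([(100.0, 200.0), (400.0, 40.0)],
                                 full_shape=(1000, 1000), scale=0.25)
```

Because the output image is built from scratch at the target size, it sidesteps the size-mismatch error entirely - every downstream module only ever sees the small image.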
Thanks for the help,
Seed.002.cpproj (282 KB)