Expand size of image for sorting/training unclassified cells

Hi CPA help,
We are working on a project where we’d like to use the CPA Classifier interface to label cells. For some of our cell types, the surrounding cellular context is important for determining their classification. Is it possible to expand the size of each image viewed in the Classifier so that we can see more of the surrounding image before deciding which bin to place the object into?
Thanks!
–Andy

Hi Andy,

For this purpose, you can add a field in your properties file called image_tile_size (if it’s not already there):

image_tile_size = <tile size in pixels>

This field specifies the crop size of the object tiles, that is, the pixel size of the square “window” that shows an individual object within CPA.
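For example, you would add a line like the one below to your properties file; the value of 100 is only an illustration, so pick whatever window size lets you see enough context around your cells:

[code]
# width/height in pixels of the square window shown for each object tile
image_tile_size = 100
[/code]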

Regards,
-Mark

In addition, you can double-click any object tile (thumbnail) in the Classifier window to pop up the entire original image, with the object in question marked by a small square in this new window. In fact, you can drag-and-drop objects directly from this full image into your sorting bins: click once and the nearest object will be marked by the indicator square; select it (or shift-click to select several) and drop them into the bin of your choice.

-David

Hi David and Mark,
Thanks so much for the help! These are both great solutions to my issue. The second option (viewing the whole image and dragging multiple selections simultaneously into a bin) works perfectly for this application.

Best!
–Andy

Great, glad that works for you. Note, however, that we do caution users who drag and drop from the whole image not to oversample the training set from just a few selected images. Fetching random objects from your whole image set counteracts this sampling bias and generally converges on the best classifier more quickly. That said, for rare phenotypes, especially when you have positive controls, we realize this method can be useful and almost necessary.

Best,
David

Hi David and Mark,

The tile size fix works well for us, but the whole-image solution causes CPA to give an error:

[code]An error occurred in the program:
IndexError: list index out of range

Traceback (most recent call last):
  File "imagetile.pyc", line 118, in OnDClick
  File "imagetools.pyc", line 55, in ShowImage
  File "imageviewer.pyc", line 214, in __init__
  File "imageviewer.pyc", line 249, in SetImage
  File "imageviewer.pyc", line 55, in __init__
  File "imagepanel.pyc", line 31, in __init__
  File "imagetools.pyc", line 111, in MergeToBitmap
  File "imagetools.pyc", line 155, in MergeChannels
[/code]

I’ve attached the pipeline that we used to generate the database. Could you point out what we’re doing wrong, please?

Thank you very much for your help!

Pang Wei
gbmTaggerNuclei.cp (11.6 KB)

Hi Pang,

I suspect this is due to the Single object table setting in ExportToDatabase, as discussed here:
http://forum.image.sc/t/cpa-on-multiple-classes-and-interpreting-saved-training-sets/13228/1

Did this get solved by exporting to multiple object tables?

Hrm, I have a suspicion, but I’d need to see your properties file…
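In the meantime, one thing you can check yourself: the traceback ends in MergeChannels, which assembles the displayed image from the per-channel fields in the properties file. The field names below are the standard CPA image settings and the values are only placeholders for a hypothetical two-channel experiment, but if these lists don’t all have the same number of entries (or don’t match what your pipeline actually exported), that is one common way to get a list index out of range at that point:

[code]
# One entry per channel -- all of these lists should be the same length
image_path_cols = Image_PathName_DNA, Image_PathName_Actin
image_file_cols = Image_FileName_DNA, Image_FileName_Actin
image_names = DNA, Actin
image_channel_colors = gray, green
channels_per_image = 1, 1
[/code]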