CellProfiler Analyst style image window in Fiji for interactive image classifier

Dear Fiji developers,

I am helping some local cell biologists analyze their multi-channel time-lapse image stacks of Fucci cells. They are specifically interested in detecting cell-cycle events based on the fluorescence intensity information in the Fucci cell line. To do that, I have been working on a plugin based on TrackMate and Weka (not the Trainable Weka Segmentation, though). The idea is simple: I detect cells using the TrackMate spot detector and generate object features for each detected spot in every fluorescence channel. I then label some spots as positive and some as negative (currently via the RoiManager), and feed all these features into Weka to train an object classifier.
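To make the data flow concrete, here is a minimal sketch of the per-spot feature table that gets handed to Weka. The `Spot` record and the feature/label layout are illustrative assumptions for this post, not TrackMate's or Weka's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the per-spot feature table fed to the classifier.
// The Spot record and column layout are illustrative, not TrackMate's API.
public class SpotFeatures {

    // One detected spot: mean intensity in each fluorescence channel,
    // plus a label ("positive", "negative", or null for unlabeled).
    public record Spot(double[] channelMeans, String label) {}

    // Flatten the labeled spots into numeric rows, with the class label
    // encoded in the last column -- the shape a Weka Instances table expects.
    public static List<double[]> featureRows(List<Spot> spots) {
        List<double[]> rows = new ArrayList<>();
        for (Spot s : spots) {
            if (s.label() == null) continue;          // skip unlabeled spots
            double[] row = new double[s.channelMeans().length + 1];
            System.arraycopy(s.channelMeans(), 0, row, 0, s.channelMeans().length);
            row[row.length - 1] = "positive".equals(s.label()) ? 1.0 : 0.0;
            rows.add(row);
        }
        return rows;
    }
}
```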

The prototype I quickly put together with Groovy scripts works really well: the cell detection and object classification are fast and accurate. It takes only 10 positive and 10 negative samples to get a good object classifier, which can then classify a whole dataset containing 30k spots in 10 minutes. So it seems I can wrap this part up into a Fiji plugin now. However, I am at a loss as to how to design a nice GUI that lets the user train an object classifier on the fly.

I think CellProfiler Analyst has a really nice and easy-to-use image classifier UI that fetches image samples on demand and lets the user drag and drop them into different classes. I would like to reproduce this, or use it as a model, in ImageJ/Fiji: the user labels, say, 5 positive and 5 negative samples to train an initial classifier; based on the result, labels/corrects a few more samples and re-trains/updates the classifier; and repeats this process until the result is acceptable, then processes the whole dataset. I think this is a nice idea, but I have been struggling for a while now to find a good way to implement it in Fiji.
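The label / train / correct / re-train loop above can be sketched independently of the GUI. In this sketch a trivial nearest-centroid classifier stands in for Weka so the control flow is visible; all names are illustrative, not part of any real API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the interactive loop: label a few samples, train, inspect,
// correct, re-train. A nearest-centroid classifier stands in for Weka.
public class InteractiveTrainer {
    private final List<double[]> pos = new ArrayList<>();
    private final List<double[]> neg = new ArrayList<>();
    private double[] posCentroid, negCentroid;

    // Called when the user drops a sample into the positive or negative bin.
    public void label(double[] features, boolean positive) {
        (positive ? pos : neg).add(features);
    }

    // Re-train: recompute the class centroids from all labels so far.
    public void retrain() {
        posCentroid = centroid(pos);
        negCentroid = centroid(neg);
    }

    // Classify one spot by its nearest class centroid.
    public boolean classify(double[] features) {
        return dist(features, posCentroid) < dist(features, negCentroid);
    }

    private static double[] centroid(List<double[]> rows) {
        double[] c = new double[rows.get(0).length];
        for (double[] r : rows)
            for (int i = 0; i < c.length; i++) c[i] += r[i] / rows.size();
        return c;
    }

    private static double dist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }
}
```

The GUI's only jobs in this design are calling `label(...)` on a drop and `retrain()` on a button press, which is why I hope a fairly thin montage-style window will do.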

I would rather keep it simple, so I am not aiming to create a whole new set of GUI components as TrackMate or TrakEM2 do. ij.plugin.MontageMaker almost fulfils what I want, but there is not much documentation and it lacks interactive options (it does not support drag-and-drop, and it would need quite some extra code to enable dynamically updating content, etc.). What I can imagine based on MontageMaker is to generate an image stack combining all positive samples and populate the montage window with it. But that means all the training samples would need to be decided beforehand, which is not ideal for interactive labeling.
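The dynamic-updating part is mostly grid geometry. A sketch of what a live montage canvas would need to keep around, instead of baking tile positions into a static image the way MontageMaker does (class and method names here are my own, purely illustrative):

```java
import java.awt.Point;

// Sketch of the grid geometry a dynamically updating montage canvas needs:
// given the number of sample thumbnails and a tile size, place tile i.
// Keeping the positions around (rather than flattening them into one image)
// lets the canvas redraw single tiles when samples are added or dragged.
public class MontageGrid {

    // Near-square grid: columns = ceil(sqrt(n)).
    public static int columns(int n) {
        return (int) Math.ceil(Math.sqrt(n));
    }

    // Top-left pixel of tile i, laid out in row-major order.
    public static Point tileOrigin(int i, int n, int tileW, int tileH, int border) {
        int cols = columns(n);
        int col = i % cols, row = i / cols;
        return new Point(col * (tileW + border), row * (tileH + border));
    }
}
```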

Could someone shed some light on this? For example, is there an easy way to enable drag-and-drop of an ImageJ image (window) onto another ImageJ UI component (canvas or window)? Or simply an image canvas or window that can contain multiple sub-images (which could be snapshots, as in the Auto Threshold try-all preview or the TrackMate TrackScheme viewer) and be updated dynamically?

Any help or suggestions will be much appreciated.

Ziqiang Huang
Image Analyst @ CRUK-CI, UK

Again, answering my own question.

For those who may also be interested, I found a sample plugin that enables drag and drop onto a plugin frame:
Unzipping the jar file reveals a Java source file with the full code.
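For anyone who wants the gist without unpacking the jar: file drop onto a Swing/AWT component can be wired with plain JDK classes. This is my own minimal sketch in the same spirit, not the sample plugin's actual code; the extension filter is just an illustration:

```java
import java.awt.datatransfer.DataFlavor;
import java.awt.dnd.DnDConstants;
import java.awt.dnd.DropTarget;
import java.awt.dnd.DropTargetAdapter;
import java.awt.dnd.DropTargetDropEvent;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JPanel;

// Minimal file drag-and-drop onto a Swing component using java.awt.dnd.
public class DropPanel extends JPanel {

    public DropPanel() {
        new DropTarget(this, new DropTargetAdapter() {
            @Override
            public void drop(DropTargetDropEvent e) {
                try {
                    e.acceptDrop(DnDConstants.ACTION_COPY);
                    @SuppressWarnings("unchecked")
                    List<File> files = (List<File>)
                        e.getTransferable().getTransferData(DataFlavor.javaFileListFlavor);
                    for (File f : imageFiles(files))
                        System.out.println("dropped: " + f);   // open with IJ here
                    e.dropComplete(true);
                } catch (Exception ex) {
                    e.dropComplete(false);
                }
            }
        });
    }

    // Keep only files with common image extensions (illustrative filter).
    static List<File> imageFiles(List<File> files) {
        List<File> out = new ArrayList<>();
        for (File f : files) {
            String n = f.getName().toLowerCase();
            if (n.endsWith(".tif") || n.endsWith(".tiff") || n.endsWith(".png"))
                out.add(f);
        }
        return out;
    }
}
```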

This, together with MontageMaker, should be sufficient to create what I want, with a little more elaboration and design of course. An interesting feature to develop/explore would be dragging an active image into the plugin frame instead of a file on disk, or dragging sub-components around inside the plugin frame (like in CPA). However, I will not spend too much time on that right now; my focus is on creating a working solution for the user first. If that works, I might come back and keep digging into this. And if I manage to get something generic enough, I will share it.


I suspect none of its components will be useful for this project, but just in case it is helpful: a web-based, deep-learning-powered replacement for CellProfiler Analyst (Carpenter lab) and Advanced Cell Classifier (Horvath lab) is coming soon. You can watch its repo here (still in alpha stage): github.com/cytoai/cyto