Dear Fiji developers,
I am helping some local cell biologists analyze their multi-channel time-lapse image stacks of Fucci cells. They are specifically interested in detecting cell-cycle events based on fluorescence-intensity information in the Fucci cell line. To that end I have been working on a plugin based on TrackMate and Weka (not TWS, though). The idea is simple: I detect cells with the TrackMate spot detector and generate object features for each detected spot in every fluorescence channel. I label some spots as positive and some as negative (in the RoiManager right now), then feed all these features into Weka to train an object classifier.
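To make the data flow concrete, here is a minimal, self-contained sketch of what the training step looks like (plain Java, no ImageJ or Weka dependency; all names are hypothetical, and a trivial nearest-centroid rule stands in for the real Weka classifier, just to show the shape of the feature table):

```java
import java.util.List;

// Hypothetical labeled spot: one feature vector (per-channel intensity
// features etc.) plus the user-assigned class.
class LabeledSpot {
    final double[] features;
    final boolean positive;
    LabeledSpot(double[] features, boolean positive) {
        this.features = features;
        this.positive = positive;
    }
}

// Trivial nearest-centroid rule standing in for the real Weka classifier:
// a spot is classified by whichever class centroid its feature vector is
// closest to in Euclidean distance.
class CentroidClassifier {
    private double[] posCentroid, negCentroid;

    void train(List<LabeledSpot> samples) {
        int dim = samples.get(0).features.length;
        posCentroid = new double[dim];
        negCentroid = new double[dim];
        int nPos = 0, nNeg = 0;
        for (LabeledSpot s : samples) {
            double[] target = s.positive ? posCentroid : negCentroid;
            for (int i = 0; i < dim; i++) target[i] += s.features[i];
            if (s.positive) nPos++; else nNeg++;
        }
        for (int i = 0; i < dim; i++) {
            posCentroid[i] /= nPos;
            negCentroid[i] /= nNeg;
        }
    }

    // True if the feature vector is closer to the positive-class centroid.
    boolean classify(double[] features) {
        return dist(features, posCentroid) < dist(features, negCentroid);
    }

    private static double dist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }
}
```

In the real plugin the `LabeledSpot` features come from the TrackMate spot detector and the classifier is a Weka model, but the table-of-feature-vectors-plus-labels structure is the same.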
The prototype I quickly put together with Groovy scripts works really well: cell detection and object classification are fast and accurate. It takes only 10 positive and 10 negative samples to get a good object classifier, which can then classify a whole dataset containing 30k spots in 10 minutes. So it seems I can wrap this part up into a Fiji plugin now. However, I am currently stuck on how to design a nice GUI that lets the user train an object classifier on the fly.
I think CellProfiler Analyst has a really nice and easy-to-use image-classifier UI that simply fetches image samples on demand and lets the user drag and drop them into different classes. I would like to reproduce this, or use it as a model, in ImageJ/Fiji: the user labels, say, 5 positive and 5 negative samples to train an initial classifier; based on the result, labels/corrects a few more samples and re-trains/updates the classifier; and repeats this until the result is acceptable, then processes the whole dataset. I think this is a nice idea, but I have been struggling for a while to find a good way to implement it in Fiji.
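The label/correct/re-train loop I have in mind can be sketched like this (plain Java, all names hypothetical; a 1-D threshold placed midway between the class means stands in for the Weka model, only to show how user corrections feed back into re-training):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interactive trainer: the user labels a few samples, the model
// is (re-)trained, the user corrects mistakes, and the cycle repeats.
class InteractiveTrainer {
    // Each entry: { featureValue, label (1 = positive, 0 = negative) }.
    final List<double[]> labeled = new ArrayList<>();
    double threshold; // 1-D stand-in for a trained Weka model

    void addLabeled(double feature, boolean positive) {
        labeled.add(new double[]{feature, positive ? 1 : 0});
    }

    // "Re-train": place the threshold midway between the two class means.
    void retrain() {
        double posSum = 0, negSum = 0;
        int nPos = 0, nNeg = 0;
        for (double[] s : labeled) {
            if (s[1] == 1) { posSum += s[0]; nPos++; }
            else           { negSum += s[0]; nNeg++; }
        }
        threshold = (posSum / nPos + negSum / nNeg) / 2;
    }

    boolean classify(double feature) {
        return feature > threshold;
    }
}
```

The point is only the control flow: every correction the user makes becomes a new labeled sample, and `retrain()` is cheap enough to call after each batch of corrections.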
I would rather keep it simple, so I am not aiming to create a whole new set of GUI components like TrackMate or TrakEM2. ij.plugin.MontageMaker seems to almost fulfil what I want, but there is not much documentation and it lacks interactive options (i.e. it does not support drag-and-drop, and it needs quite some extra code to enable dynamically updated content, etc.). What I can think of, based on MontageMaker, is to generate an image stack combining all positive samples and populate the montage window with it. But that apparently means all the training samples have to be decided beforehand, which is not ideal for interactive labeling.
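What I would need instead of a pre-baked montage stack is something like a "live montage" whose cells can be added and replaced at any time. As a rough sketch of the idea in plain Swing (no ImageJ dependency; the class and method names are my own invention):

```java
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JPanel;

// Hypothetical "live montage": a panel of thumbnail cells that can be added
// or replaced at runtime, unlike a MontageMaker stack fixed beforehand.
class ThumbnailGrid extends JPanel {
    ThumbnailGrid(int columns) {
        super(new GridLayout(0, columns, 2, 2)); // 0 rows = grow as needed
    }

    // Add one spot snapshot; returns its cell index for later updates.
    int addThumbnail(BufferedImage snapshot, String caption) {
        JLabel cell = new JLabel(caption, new ImageIcon(snapshot), JLabel.CENTER);
        add(cell);
        revalidate();
        repaint();
        return getComponentCount() - 1;
    }

    // Replace the snapshot in an existing cell (e.g. after re-classification).
    void updateThumbnail(int index, BufferedImage snapshot) {
        ((JLabel) getComponent(index)).setIcon(new ImageIcon(snapshot));
        revalidate();
        repaint();
    }
}
```

In ImageJ terms the `BufferedImage` snapshots would come from the spot ROIs; the key difference from MontageMaker is that the grid grows and updates while the user labels.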
Could someone shed some light on this? For example, is there an easy way to enable drag-and-drop of an ImageJ image (window) onto another ImageJ UI component (canvas or window)? Or simply an image canvas or window that can contain multiple sub-images (snapshots, as in the Auto Threshold "Try all" preview or the TrackMate TrackScheme viewer) but can be updated dynamically?
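For the drag-and-drop part, the closest thing I have found in plain Swing is a `TransferHandler` on the drop target. A minimal sketch of what I mean (not ImageJ-specific; here a dragged spot is represented by a plain-string ID, and all names are hypothetical):

```java
import java.awt.datatransfer.DataFlavor;
import java.util.List;
import javax.swing.TransferHandler;

// Hypothetical drop handler for one class bin ("positive" or "negative"):
// accepts a spot ID dropped as a plain string and records the assignment.
class SpotDropHandler extends TransferHandler {
    final List<String> classBin; // IDs of spots assigned to this class

    SpotDropHandler(List<String> classBin) {
        this.classBin = classBin;
    }

    @Override
    public boolean canImport(TransferSupport support) {
        return support.isDataFlavorSupported(DataFlavor.stringFlavor);
    }

    @Override
    public boolean importData(TransferSupport support) {
        if (!canImport(support)) return false;
        try {
            String spotId = (String) support.getTransferable()
                                            .getTransferData(DataFlavor.stringFlavor);
            classBin.add(spotId); // record the user's labeling decision
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

The handler would be installed with `setTransferHandler(...)` on each class panel; the open question for me is how to wire this up cleanly to ImageJ image windows/canvases rather than plain Swing components.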
Any help or suggestions will be much appreciated.
Image Analyst @ CRUK-CI, UK