Confidence of fit in Classifier?

cellprofiler-analyst

#1

It would be nice to know how much of the training set the Classifier correctly recognizes with the rules it derives from it – it’d help give a better sense of whether there’s any hope :wink: Is there a way to do this now?

thanks,
tim


#2

We have a cross validation function, but it still has some bugs that need to be squashed before we can release it. A rewrite of CPA is also in the works (which will include x-validate as well), so enhancements to the current version are somewhat infrequent. Unfortunately, for now confidence must be inferred subjectively through interactions with the classifier.
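For readers unfamiliar with the idea: cross-validation estimates that confidence by repeatedly training on part of the labeled set and scoring on the held-out part. The sketch below is not CPA's implementation (CPA is Java-based and its cross-validation code isn't shown here); it's a generic k-fold illustration in Python, using a toy nearest-centroid classifier whose names (`nearest_centroid_fit`, `k_fold_accuracy`, etc.) are all hypothetical.

```python
# Generic k-fold cross-validation sketch (illustrative only, not CPA's code).
# A toy 1-D nearest-centroid classifier stands in for CPA's rule-based one.
from statistics import mean

def nearest_centroid_fit(xs, ys):
    """Learn one centroid per class label from 1-D features."""
    centroids = {}
    for label in set(ys):
        centroids[label] = mean(x for x, y in zip(xs, ys) if y == label)
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def k_fold_accuracy(xs, ys, k=5):
    """Average held-out accuracy over k folds of the training set."""
    n = len(xs)
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    scores, start = [], 0
    for size in fold_sizes:
        stop = start + size
        # Train on everything outside the fold, test on the fold itself.
        model = nearest_centroid_fit(xs[:start] + xs[stop:],
                                     ys[:start] + ys[stop:])
        hits = sum(nearest_centroid_predict(model, x) == y
                   for x, y in zip(xs[start:stop], ys[start:stop]))
        scores.append(hits / size)
        start = stop
    return mean(scores)

# Two well-separated classes, so held-out accuracy should be high.
xs = [0.1, 0.2, 0.3, 0.4, 0.5, 5.1, 5.2, 5.3, 5.4, 5.5]
ys = ["neg"] * 5 + ["pos"] * 5
print(k_fold_accuracy(xs, ys, k=5))  # → 1.0
```

The resulting score is exactly the kind of number asked about in #1: roughly how much of the training set the learned rules can be expected to get right.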

Best,
Adam


#3

Okay, cool. Not to be totally obnoxious, but do you have an order-of-magnitude sense of when the rewritten version might be ready (weeks, months, years)? I’ve just started using CPA but it looks like a really useful analysis and validation tool.

thanks,
tim


#4

Tell ya what, you’ve inspired me to revisit this code and see if I can get it back up and running. As I recall, the problem was that it didn’t run properly on Windows, though it did work on our Macs, so we just got rid of the feature for consistency.

I’ll get back to you later today (how’s that for an order of magnitude), and let you know how things go.


#5

Hey, nice! I’ll look forward to it. :slight_smile:


#6

Alright, looks like we’ve finally got it taken care of. It still has to be re-tested under Mac OS, but I’ll likely get a chance to do that tomorrow. I should be able to get you a spankin’ new .jar by the end of the week.

Cheers
-Adam


#7

Tim, I’ve jarred the new CPA revision with cross-validation functionality. If you’d like me to send you a copy of the jar, please drop me an email at afraser (at) broad . mit . edu

cheers!