This is a big one, which requires the set-up of a database back-end (done already) and a (web) front-end:

- [x] Dump serialized config files to JSON database.
- [x] Represent the performance in a way that general metrics can be shown.
  - [x] project
  - [x] name
  - [x] training_set (i.a.)
  - [x] testing_set (i.a.) -- this and the above probably need to be abstracted from loaders
  - [x] string representation of the used features
  - [x] string NAME of the classifier used
  - [x] ~~POS / NEG f1-scores (could be put in a graph)~~ micro F1
- [ ] Able to overview and compare experiments visually.
  - [x] Flat performance bar.
  - [x] Plotting performance on data proportions.
  - [ ] Summary of experiment configurations.
  - [x] Confusion matrices.
  - [ ] Aggregate performances in one report.
  - [ ] t-SNE?
- [ ] Insight into feature importances.
  - [x] LIME evaluation.
  - [ ] Coef. representations.

<!--- @huboard:{"order":5.5,"milestone_order":1.75,"custom_state":""} -->
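The first checked item (dumping serialized configs to a JSON database) could look like the sketch below. The field names follow the checklist above; the concrete values and the `record` variable are illustrative assumptions, not the actual schema.

```python
import json

# Hypothetical experiment record; field names mirror the checklist
# (project, name, training_set, testing_set, features, classifier),
# values are placeholders.
record = {
    "project": "example_project",
    "name": "baseline_run",
    "training_set": "train_v1",
    "testing_set": "test_v1",
    "features": "word_ngrams(1,2)+pos_tags",
    "classifier": "LinearSVC",
    "micro_f1": 0.83,
}

# Serialize for storage in a JSON-backed database, then restore.
serialized = json.dumps(record, sort_keys=True)
restored = json.loads(serialized)
print(restored["classifier"])
```

Keeping the record flat like this makes it straightforward to query and aggregate later for the report and comparison views.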
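Since the per-class POS/NEG F1 scores were replaced by micro F1, here is a minimal sketch of what that metric computes: true positives, false positives, and false negatives are pooled over all classes before the F1 formula is applied. The function name and example labels are assumptions for illustration.

```python
from collections import Counter

def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool TP/FP/FN across all classes first.
    For single-label classification this coincides with accuracy."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but it was not p
            fn[t] += 1          # missed the true class t
    tp_sum = sum(tp.values())
    fp_sum = sum(fp.values())
    fn_sum = sum(fn.values())
    # F1 = 2*TP / (2*TP + FP + FN), computed on the pooled counts
    return 2 * tp_sum / (2 * tp_sum + fp_sum + fn_sum)

print(micro_f1(["pos", "neg", "pos", "neg"],
               ["pos", "pos", "pos", "neg"]))  # 0.75
```

A single pooled number like this is easier to show in the flat performance bar than a pair of per-class scores.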