
License: GPL v3

Mono- and Multi-view classification benchmark

This project aims to be an easy-to-use solution for running a preliminary benchmark on a dataset and evaluating the capacity of mono- and multi-view algorithms to classify it correctly.

Getting Started

Prerequisites

To use this project, you'll need a working Python installation and the following Python modules:

  • pyscm - Set Covering Machine (Marchand, M., & Shawe-Taylor, J., 2003), implemented by A. Drouin, F. Brochu, G. Letarte St-Pierre, M. Osseni, P-L. Plante
  • numpy, scipy
  • matplotlib - Used to plot results
  • sklearn - Used for the monoview classifiers
  • joblib - Used to compute on multiple threads
  • h5py - Used to generate HDF5 datasets on the hard drive and use them to save RAM
  • pickle - Used to store some results
  • graphviz - Used for decision tree interpretation

They are all tested in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Versions.py, which is automatically checked each time you run the Exec script.
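Most of these modules can be installed with pip (pickle ships with Python's standard library); the command below is a suggestion, and it assumes pyscm is installed separately from its authors' repository:

pip install numpy scipy matplotlib scikit-learn joblib h5py graphviz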

Installing

No installation is needed; just install the prerequisites.

Running on simulated data

To try the benchmark on simulated data, run:

cd multiview-machine-learning-omis/Code
python Exec.py -log

Results will be stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/

If no path is specified, hdf5 datasets are stored in multiview-machine-learning-omis/Data
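As an illustration of the HDF5 workflow, a generated dataset can be inspected lazily with h5py. This is a minimal sketch; the file name Plausible.hdf5 is an assumption, not a guaranteed output name:

import h5py

# Open a generated dataset read-only; the file name is hypothetical.
with h5py.File("multiview-machine-learning-omis/Data/Plausible.hdf5", "r") as f:
    # Walk the file and print each dataset's path, shape and dtype
    # without loading the arrays into RAM (h5py reads from disk lazily).
    def describe(name, node):
        if isinstance(node, h5py.Dataset):
            print(name, node.shape, node.dtype)
    f.visititems(describe)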

Discovering the arguments

To see all the arguments of this script, with their descriptions and default values, run:

cd multiview-machine-learning-omis/Code
python Exec.py -h

Understanding Results/ architecture

Results are stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/. A directory is created there, named after the database used to run the script. Each time the script is run, a new directory named after the run's date and time is created inside it. In that directory:

  • If the script is run with more than one statistic iteration (one per seed), one sub-directory is created per iteration, and the statistical analysis over all iterations is stored directly in the date/time directory
  • If it is run with a single iteration, that iteration's results are stored directly in the date/time directory

The results for each iteration are graphs recapping the classifiers' scores; each classifier's configuration and detailed results are stored in a directory of its own. To explore the results, run the Exec script and browse multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/Plausible/
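For instance, a run on the Plausible dataset with two statistic iterations could produce a layout like the following (directory names are illustrative, not the project's exact naming scheme):

Results/
  Plausible/
    2016_10_06-09_13/      one directory per run, named after date and time
      iteration_1/         graphs and per-classifier directories for the first seed
      iteration_2/         same for the second seed
      ...                  statistical analysis over both iterations, stored here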

Running the tests

/!\ Still in development, test success is not meaningful at the moment /!\

To run the tests, use:

cd multiview-machine-learning-omis/
python -m unittest discover
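unittest's standard -v flag prints one line per test, which can help while the suite is still under development:

cd multiview-machine-learning-omis/
python -m unittest discover -v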

Author

  • Baptiste BAUVIN

Contributors

  • Mazid Osseni
  • Alexandre Drouin
  • Nikolas Huelsmann