    Mono- and Multi-view classification benchmark

This project aims to be an easy-to-use solution for running a prior benchmark on a dataset and evaluating mono- and multi-view algorithms' capacity to classify it correctly.

    Getting Started

To try it on simulated data, run:

    python multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/ExecClassif.py -log

    Results will be stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/
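
If you prefer to launch the benchmark from a Python script rather than the shell, a minimal sketch using only the standard library (the path and the -log flag are taken from the command above) could look like:

    import subprocess

    # Launch the benchmark exactly as the shell command above would.
    exit_code = subprocess.call([
        "python",
        "multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/ExecClassif.py",
        "-log",
    ])
    if exit_code != 0:
        raise RuntimeError("Benchmark exited with status %d" % exit_code)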

    Prerequisites

To be able to use this project, you'll need a working Python interpreter and the following Python modules:

• pyscm - the Set Covering Machine (Marchand, M., & Shawe-Taylor, J., 2002), implemented by A. Drouin, F. Brochu, G. Letarte St-Pierre, M. Osseni, P-L. Plante
    • numpy, scipy
    • matplotlib - Used to plot results
    • sklearn - Used for the monoview classifiers
• joblib - Used to parallelize computations across multiple threads
• h5py - Used to generate HDF5 datasets on the hard drive and read from them to spare RAM (see the sketch after this list)
• (argparse - Used to parse the input arguments; part of the standard library)
• (logging - Used to generate logs; part of the standard library)
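
As an illustration of the h5py point above, here is a minimal sketch (not the project's own code; the view and label names are made up) of how an HDF5 file keeps a dataset on the hard drive so it never has to fit in RAM:

    import numpy as np
    import h5py

    # Write a toy two-view dataset to an HDF5 file on disk.
    # "View0", "View1" and "Labels" are illustrative names, not the
    # project's actual dataset layout.
    with h5py.File("toy_dataset.hdf5", "w") as f:
        f.create_dataset("View0", data=np.random.rand(100, 20))
        f.create_dataset("View1", data=np.random.rand(100, 35))
        f.create_dataset("Labels", data=np.random.randint(0, 2, 100))

    # Re-open it read-only: slices are read lazily from disk, sparing RAM.
    with h5py.File("toy_dataset.hdf5", "r") as f:
        first_rows = f["View0"][:10]  # only these 10 rows are loaded
        print(first_rows.shape)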

Their availability is tested in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Versions.py, which is checked automatically each time you run the ExecClassif script.
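
A minimal sketch of the kind of check such a script can perform (a hypothetical illustration, not the contents of Versions.py, which may also verify version numbers):

    # Hypothetical dependency check: verify each required module imports.
    def _importable(name):
        try:
            __import__(name)
            return True
        except ImportError:
            return False

    def check_modules(names=("pyscm", "numpy", "scipy", "matplotlib",
                             "sklearn", "joblib", "h5py")):
        missing = [name for name in names if not _importable(name)]
        if missing:
            raise ImportError("Missing required modules: " + ", ".join(missing))

    check_modules()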

    Installing

    No installation is needed, just the prerequisites.
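
Assuming the modules are published under their usual package-index names (scikit-learn is the package that provides sklearn), most prerequisites can typically be obtained with:

    pip install numpy scipy matplotlib scikit-learn joblib h5py

pyscm is left out above because its availability on the package index is an assumption; check its authors' installation instructions.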

    Running the tests

To test the project, run it on simulated data as described in Getting Started:

    python multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/ExecClassif.py -log

    Results will be stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/

    Authors

    • Baptiste BAUVIN