    Mono- and Multi-view classification benchmark

    This project aims to be an easy-to-use solution for running a prior benchmark on a dataset and evaluating the capacity of mono- and multi-view algorithms to classify it correctly.

    Getting Started

    Prerequisites

    To use this project, you will need the following Python modules:

    • pyscm - Set Covering Machine (Marchand & Taylor, 2003), implemented by A. Drouin, F. Brochu, G. Letarte St-Pierre, M. Osseni and P-L. Plante
    • numpy, scipy
    • matplotlib - Used to plot results
    • sklearn - Used for the monoview classifiers
    • joblib - Used to compute on multiple threads
    • h5py - Used to generate HDF5 datasets on hard drive and use them to spare RAM
    • pickle - Used to store some results (part of the Python standard library)
    • graphviz - Used for decision tree interpretation

    All of them are checked in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Versions.py, which runs automatically each time you execute the Exec script.
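    The kind of availability check Versions.py performs can be sketched as follows. This is a hedged illustration, not the project's actual code; the module names are taken from the list above, and `missing_modules` is a hypothetical helper.

```python
import importlib.util

# Modules listed in the prerequisites (pickle ships with Python itself)
REQUIRED = ["numpy", "scipy", "matplotlib", "sklearn",
            "joblib", "h5py", "pickle", "graphviz", "pyscm"]

def missing_modules(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    absent = missing_modules(REQUIRED)
    if absent:
        print("Missing modules:", ", ".join(absent))
    else:
        print("All prerequisites are available.")
```

    Running such a probe before the benchmark fails fast with a readable message instead of an ImportError deep inside a run.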

    Installing

    No installation is needed beyond the prerequisites listed above.

    Running on simulated data

    To run the benchmark on simulated data, use:

    cd multiview-machine-learning-omis/Code
    python Exec.py -log

    Results will be stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/.

    If no path is specified, HDF5 datasets are stored in multiview-machine-learning-omis/Data.

    Discovering the arguments

    To see all the arguments of this script, with their descriptions and default values, run:

    cd multiview-machine-learning-omis/Code
    python Exec.py -h
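    The flags above are the kind argparse produces; the following is a hypothetical reconstruction of how such flags could be declared in Exec.py. Only -log appears in this README; the --name flag and its default are illustrative assumptions.

```python
import argparse

# Hypothetical sketch of Exec.py's argument declarations; only -log is
# taken from the README, --name is an illustrative assumption.
parser = argparse.ArgumentParser(description="Run the mono/multi-view benchmark")
parser.add_argument("-log", action="store_true",
                    help="Write log output to a file instead of the console")
parser.add_argument("--name", default="Plausible",
                    help="Name of the database to use (hypothetical flag)")

args = parser.parse_args(["-log"])
print(args.log, args.name)
```

    With `action="store_true"`, passing -log sets `args.log` to True, which matches the way the flag is used in the command above.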

    Understanding Results/ architecture

    Results are stored in multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/. A directory named after the database used is created there, and each run of the script creates a new sub-directory named after the date and time of the run. In that directory:

    • If the script is run with more than one statistic iteration (one for each seed), one directory is created per iteration and the statistical analysis is stored alongside them
    • If it is run with a single iteration, the iteration results are stored directly in that directory

    The results for each iteration include graphs recapping the classifiers' scores, and each classifier's configuration and results are stored in a directory of its own. To explore the results, run the Exec script and browse multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/Plausible/.
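    The directory layout described above can be sketched as follows. This is a minimal illustration, not the project's code: the `make_run_dir` helper, the timestamp format, and the `iter_N` names are all assumptions.

```python
import datetime
import pathlib
import tempfile

def make_run_dir(results_root, database_name, n_iterations):
    """Create Results/<database>/<timestamp>/ and, when more than one
    statistic iteration is requested, one sub-directory per iteration.
    Hypothetical sketch; names and timestamp format are assumptions."""
    stamp = datetime.datetime.now().strftime("%Y_%m_%d-%H_%M")  # assumed format
    run_dir = pathlib.Path(results_root) / database_name / stamp
    run_dir.mkdir(parents=True, exist_ok=True)
    if n_iterations > 1:
        for i in range(n_iterations):
            (run_dir / f"iter_{i + 1}").mkdir(exist_ok=True)
    return run_dir

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        d = make_run_dir(root, "Plausible", 3)
        print(sorted(p.name for p in d.iterdir()))
```

    With a single iteration the run directory stays flat, mirroring the second bullet above; with several, per-iteration folders sit next to the statistical analysis.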

    Running the tests

    /!\ Still in development: test success is not meaningful at the moment /!\

    To run the test suite, use:

    cd multiview-machine-learning-omis/
    python -m unittest discover
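    `unittest discover` picks up any test module matching test_*.py under the repository. As a hedged illustration of what it finds, a minimal test module could look like this (the class and file path are hypothetical, not taken from the project):

```python
import unittest

# Hypothetical example of a test module that `unittest discover` would
# pick up if saved as e.g. test_example.py (the path is illustrative).
class TestExample(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    # argv/exit overrides let the module also run as a plain script
    unittest.main(argv=["test_example"], exit=False)
```
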

    Author

    • Baptiste BAUVIN

    Contributors

    • Mazid Osseni
    • Alexandre Drouin
    • Nikolas Huelsmann