Commit e0e40b06 authored by Baptiste Bauvin

Working on adding a setup.py and Sphinx documentation

parent beb946b2
Showing 27 additions and 24 deletions
@@ -4,6 +4,6 @@ TODO
 .ipynb_checkpoints/**
 Results/**
 Data/**
-Code/MonoMultiviewClassifiers/Results/*
-Code/Tests/temp_tests/**
+multiview_platform/MonoMultiviewClassifiers/Results/*
+multiview_platform/Tests/temp_tests/**
 multiview-machine-learning-omis.iml
\ No newline at end of file
+from . import MonoMultiViewClassifiers, Tests, Exec
+# import pdb;pdb.set_trace()
\ No newline at end of file
+include *.md
\ No newline at end of file
@@ -23,7 +23,7 @@ And the following python modules :
 * ([graphviz](https://pypi.python.org/pypi/graphviz) - Used for decision tree interpretation)
-They are all tested in `multiview-machine-mearning-omis/Code/MonoMutliViewClassifiers/Versions.py` which is automatically checked each time you run the `Exec` script
+They are all tested in `multiview-machine-mearning-omis/multiview_platform/MonoMutliViewClassifiers/Versions.py` which is automatically checked each time you run the `Exec` script
 ### Installing
@@ -33,14 +33,14 @@ No installation is needed, just the prerequisites.
 In order to run it you'll need to try on **simulated** data with the command
 ```
-cd multiview-machine-learning-omis/Code
+cd multiview-machine-learning-omis/multiview_platform
 python Exec.py -log
 ```
-Results will be stored in `multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/`
+Results will be stored in `multiview-machine-learning-omis/multiview_platform/MonoMultiViewClassifiers/Results/`
 If you want to run a multiclass (one versus one) benchmark on simulated data, use :
 ```
-cd multiview-machine-learning-omis/Code
+cd multiview-machine-learning-omis/multiview_platform
 python Exec.py -log --CL_nbClass 3
 ```
@@ -51,14 +51,14 @@ If no path is specified, simulated hdf5 datasets are stored in `multiview-machin
 In order to see all the arguments of this script, their description and default values run :
 ```
-cd multiview-machine-learning-omis/Code
+cd multiview-machine-learning-omis/multiview_platform
 python Exec.py -h
 ```
 ### Understanding `Results/` architecture
-Results are stored in `multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/`
+Results are stored in `multiview-machine-learning-omis/multiview_platform/MonoMultiViewClassifiers/Results/`
 A directory will be created with the name of the database used to run the script.
 For each time the script is run, a new directory named after the running date and time will be created.
 In that directory:
@@ -66,7 +66,7 @@ In that directory:
 * If it is run with one iteration, the iteration results will be stored in the current directory
 The results for each iteration are graphs plotting the classifiers scores and the classifiers config and results are stored in a directory of their own.
-To explore the results run the `Exec` script and go in `multiview-machine-learning-omis/Code/MonoMultiViewClassifiers/Results/Plausible/`
+To explore the results run the `Exec` script and go in `multiview-machine-learning-omis/multiview_platform/MonoMultiViewClassifiers/Results/Plausible/`
 ### Dataset compatibility
@@ -98,7 +98,7 @@ One group for the additional data called `Metadata` containing at least 3 attrib
 In order to run the script on your dataset you need to use :
 ```
-cd multiview-machine-learning-omis/Code
+cd multiview-machine-learning-omis/multiview_platform
 python Exec.py -log --name <your_dataset_name> --type <.cvs_or_.hdf5> --pathF <path_to_your_dataset>
 ```
 This will run a full benchmark on your dataset using all available views and labels.
...
-if __name__=="__main__":
+def Exec():
     import Versions
     Versions.testVersions()
     import sys
@@ -7,3 +8,5 @@ if __name__=="__main__":
     ExecClassif.execClassif(sys.argv[1:])
+if __name__=="__main__":
+    Exec()
\ No newline at end of file
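Wrapping the script body in `Exec()` makes the main routine an importable, zero-argument callable instead of code reachable only via `python Exec.py`, which is what a setuptools console entry point requires. Below is a minimal sketch of how the in-progress setup.py mentioned in the commit message could wire it up; the actual setup.py is not part of this diff, so the package name, version, and script name here are assumptions.

```python
# Hypothetical setup.py sketch: NOT the commit's actual file (not shown in
# this diff). Package and script names are assumptions from the repo layout.
from setuptools import setup, find_packages

setup(
    name="multiview_platform",        # assumed from the renamed directory
    version="0.0.0.0",                # mirrors the __version__ added below
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # console_scripts needs a zero-argument callable, which is
            # exactly what the new Exec() function provides.
            "multiview-exec = multiview_platform.Exec:Exec",
        ],
    },
)
```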
-import os
-modules = []
-for module in os.listdir(os.path.dirname(os.path.realpath(__file__))):
-    if module == '__init__.py' or module[-3:] != '.py':
-        continue
-    __import__(module[:-3], locals(), globals(), [], 1)
-    pass
-del module
-del os
+__version__ = "0.0.0.0"
""" """
To be able to add another metric to the benchmark you must : To be able to add another metric to the benchmark you must :
...@@ -31,3 +22,13 @@ Define a getConfig function ...@@ -31,3 +22,13 @@ Define a getConfig function
configString : A string that gives the name of the metric and explains how it is configured. Must end by configString : A string that gives the name of the metric and explains how it is configured. Must end by
(lower is better) or (higher is better) to be able to analyze the preds (lower is better) or (higher is better) to be able to analyze the preds
""" """
import os
modules = []
for module in os.listdir(os.path.dirname(os.path.realpath(__file__))):
if module in ['__init__.py', 'framework.py'] or module[-3:] != '.py':
continue
__import__(module[:-3], locals(), globals(), [], 1)
pass
del module
del os
\ No newline at end of file
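The loop added above auto-imports every `.py` file in the package (now also skipping `framework.py`), so dropping a new metric module into the directory registers it without editing `__init__.py`. The trailing `1` in `__import__(module[:-3], locals(), globals(), [], 1)` is the `level` argument, which makes the import relative to the current package; at module top level `locals()` and `globals()` are the same dict, so their order here is harmless. A readability-oriented equivalent using `importlib` might look like the sketch below; this is an illustration, not code from the commit.

```python
# Illustrative equivalent of the auto-import loop above, using importlib.
import importlib
import os

_pkg_dir = os.path.dirname(os.path.realpath(__file__))
for _name in os.listdir(_pkg_dir):
    if _name in ('__init__.py', 'framework.py') or not _name.endswith('.py'):
        continue  # skip the package init, the framework helper, non-modules
    # Relative import of the sibling module, like __import__(..., level=1)
    importlib.import_module('.' + _name[:-3], package=__name__)
```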
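The docstring spells out part of the contract for a new metric module: define a `getConfig` function whose returned string ends with "(lower is better)" or "(higher is better)" so results can be ranked. A minimal sketch of such a module follows; the `score` function, its signature, and the scikit-learn dependency are assumptions, since the rest of the contract is truncated in this hunk.

```python
# Hypothetical metric module sketch (e.g. a hamming.py dropped into Metrics/).
# Only getConfig is documented in the visible docstring; score() and its
# signature are assumptions about the truncated part of the contract.
from sklearn.metrics import hamming_loss


def score(y_true, y_pred, **kwargs):
    # Fraction of mislabeled samples
    return hamming_loss(y_true, y_pred)


def getConfig(**kwargs):
    # Must end with "(lower is better)" or "(higher is better)" so the
    # result analysis knows how to rank classifiers on this metric.
    return "Hamming loss (lower is better)"
```

With the auto-import loop above, this file would be picked up automatically the next time the `Metrics` package is imported.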