Commit a6c709f3 authored by Baptiste Bauvin

Updated the readme

parent d7b1104d
@@ -45,47 +45,25 @@ In order to run it you'll need to try on **simulated** data with the command
 from multiview_platform.execute import execute
 execute()
 ```
-Results will be stored in the results directory of the installation path : `path/to/install/multiview-machine-learning-omis/results`.
-And simulated hdf5 datasets are stored in `path/to/install/multiview-machine-learning-omis/data`
+This will run the first example. For more information about the examples, see the documentation.
+Results will be stored in the results directory of the installation path:
+`path/to/install/multiview-machine-learning-omis/multiview_platform/examples/results`.
+The documentation proposes a detailed interpretation of the results.
 ### Discovering the arguments
-In order to see all the arguments of this script, their description and default values, run:
-```
-cd multiview-machine-learning-omis/multiview_platform
-python execute.py -h
-```
-The arguments can be passed through a file using `python Exec.py @<path_to_doc>`
-The file must be formatted with one newline instead of each space:
-Command line arguments `-debug --CL_type Monoview --CL_algos_monoview Adaboost SVM` will be formatted
-```
--debug
---CL_type
-Monoview
---CL_algos_monoview
-Adaboost
-SVM
-```
-Moreover, for Monoview algorithms (Multiview is still WIP), it is possible to pass multiple arguments instead of just one.
-Thus, executing `python execute.py --RF_trees 10 100 --RF_max_depth 3 4 --RF_criterion entropy` will result in the generation of several classifiers called
-`RandomForest_10_3_entropy`, with 10 trees and a max depth of 3, `RandomForest_10_4_entropy`, with 10 trees and a max depth of 4, `RandomForest_100_3_entropy`, and `RandomForest_100_4_entropy`, to test all the passed argument combinations.
-### Understanding `results/` architecture
-Results are stored in `multiview-machine-learning-omis/multiview_platform/mono_multi_view_classifiers/results/`
-A directory will be created with the name of the database used to run the script.
-Each time the script is run, a new directory named after the running date and time will be created.
-In that directory:
-* If the script is run using more than one statistic iteration (one for each seed), it will create one directory for each iteration and store the statistical analysis in the current directory
-* If it is run with one iteration, the iteration results will be stored in the current directory
-The results for each iteration are graphs plotting the classifiers' scores, and each classifier's config and results are stored in a directory of their own.
-To explore the results, run the `execute` script and go in `multiview-machine-learning-omis/multiview_platform/mono_multi_view_classifiers/results/plausible/`
+All the arguments of the platform are stored in a YAML config file. Some config files are given as examples.
+The file stored in `multiview-machine-learning-omis/config_files/config.yml` is documented and it is highly recommended
+to read it carefully before playing around with the parameters.
+You can create your own configuration file. In order to run the platform with it, run:
+```python
+from multiview_platform.execute import execute
+execute(config_path="/absolute/path/to/your/config/file")
+```
+For further information about classifier-specific arguments, see the documentation.
 ### Dataset compatibility
...
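The pre-update README above describes how lists of hyperparameter values (e.g. `--RF_trees 10 100 --RF_max_depth 3 4`) expand into one classifier per combination, named like `RandomForest_10_3_entropy`. A minimal sketch of that expansion, assuming nothing about the platform's internals (the `classifier_names` helper is illustrative, not part of the codebase):

```python
from itertools import product

def classifier_names(prefix, trees, depths, criteria):
    """Build one name per hyperparameter combination, e.g. RandomForest_10_3_entropy."""
    return ["_".join([prefix] + [str(v) for v in combo])
            for combo in product(trees, depths, criteria)]

names = classifier_names("RandomForest", [10, 100], [3, 4], ["entropy"])
print(names)
# → ['RandomForest_10_3_entropy', 'RandomForest_10_4_entropy',
#    'RandomForest_100_3_entropy', 'RandomForest_100_4_entropy']
```

The same cartesian-product idea applies to the YAML config below, where each classifier parameter is given as a list of candidate values.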
@@ -10,7 +10,7 @@ Base :
   type: ".hdf5"
   # The views to use in the benchmark, an empty value will result in using all the views
   views:
-  # The path to the directory where the datasets are stored
+  # The path to the directory where the datasets are stored, an absolute path is advised
   pathf: "examples/data/example_1/"
   # The niceness of the processes, useful to lower their priority
   nice: 0
@@ -25,7 +25,7 @@ Base :
   # To add noise to the data, will add gaussian noise with noise_std
   add_noise: False
   noise_std: 0.0
-  # The directory in which the results will be stored
+  # The directory in which the results will be stored, an absolute path is advised
   res_dir: "examples/results/example_1/"
 # All the classification-related configuration options
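The `add_noise` / `noise_std` options above add zero-mean gaussian noise to the data. A stdlib-only sketch of that idea, assuming a flat list of feature values (the function name and data layout are illustrative, not the platform's code):

```python
import random

def add_gaussian_noise(data, noise_std, seed=None):
    """Return a copy of `data` with zero-mean gaussian noise of std `noise_std` added.
    Illustrative only: the platform's actual implementation may differ."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_std) for x in data]

view = [0.2, 0.5, 0.9]
noisy = add_gaussian_noise(view, noise_std=0.5, seed=42)  # same length, perturbed values
```

With `noise_std: 0.0` (the default above) the data is returned unchanged.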
@@ -89,26 +89,11 @@ adaboost:
   n_estimators: [50]
   base_estimator: ["DecisionTreeClassifier"]
-adaboost_pregen:
-  n_estimators: [50]
-  base_estimator: ["DecisionTreeClassifier"]
-  n_stumps: [1]
-adaboost_graalpy:
-  n_iterations: [50]
-  n_stumps: [1]
 decision_tree:
   max_depth: [10]
   criterion: ["gini"]
   splitter: ["best"]
-decision_tree_pregen:
-  max_depth: [10]
-  criterion: ["gini"]
-  splitter: ["best"]
-  n_stumps: [1]
 sgd:
   loss: ["hinge"]
   penalty: [l2]
@@ -119,31 +104,6 @@ knn:
   weights: ["uniform"]
   algorithm: ["auto"]
-scm:
-  model_type: ["conjunction"]
-  max_rules: [10]
-  p: [0.1]
-scm_pregen:
-  model_type: ["conjunction"]
-  max_rules: [10]
-  p: [0.1]
-  n_stumps: [1]
-cq_boost:
-  mu: [0.01]
-  epsilon: [1e-06]
-  n_max_iterations: [5]
-  n_stumps: [1]
-cg_desc:
-  n_max_iterations: [10]
-  n_stumps: [1]
-cb_boost:
-  n_max_iterations: [10]
-  n_stumps: [1]
 lasso:
   alpha: [1]
   max_iter: [2]
@@ -198,17 +158,6 @@ difficulty_fusion:
   criterion: ["gini"]
   splitter: ["best"]
-scm_late_fusion:
-  classifier_names: [["decision_tree"]]
-  p: 0.1
-  max_rules: 10
-  model_type: 'conjunction'
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
 majority_voting_fusion:
   classifier_names: [["decision_tree", "decision_tree", "decision_tree", ]]
   classifier_configs:
@@ -232,8 +181,3 @@ weighted_linear_late_fusion:
     max_depth: [1]
     criterion: ["gini"]
     splitter: ["best"]
-mumbo:
-  base_estimator: [null]
-  n_estimators: [10]
-  best_view_mode: ["edge"]
\ No newline at end of file
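The `majority_voting_fusion` section above configures three per-view `decision_tree` classifiers whose predictions are fused by majority vote. A minimal sketch of that fusion step, assuming each classifier's predictions are a list of labels (the `majority_vote` function is illustrative, not the platform's implementation):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-view predictions by majority vote, one fused label per sample.
    Illustrative sketch of the idea behind majority_voting_fusion."""
    fused = []
    for sample_votes in zip(*predictions):  # one tuple of votes per sample
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Three monoview "decision_tree" classifiers, as in the config above:
view_predictions = [
    [0, 1, 1, 0],  # classifier on view 1
    [0, 1, 0, 0],  # classifier on view 2
    [1, 1, 1, 0],  # classifier on view 3
]
print(majority_vote(view_predictions))  # → [0, 1, 1, 0]
```

Each sample's fused label is simply the most frequent vote across the views, which is why an odd number of classifiers is convenient for binary problems.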