Commit d42a7a41 authored by Baptiste Bauvin

Merged private algos

parents 7c669cd7 02266792
Showing 438 additions and 381 deletions
.gitignore
@@ -14,4 +14,10 @@ build*
 dist*
 multiview_platform/.idea/*
 .gitignore
-multiview_platform/examples/results*
+multiview_platform/examples/results/example_1/*
+multiview_platform/examples/results/example_2_1_1/*
+multiview_platform/examples/results/example_2_1_2/*
+multiview_platform/examples/results/example_2_2_1/*
+multiview_platform/examples/results/example_3/*
+multiview_platform/examples/results/example_4/*
+multiview_platform/examples/results/example_5/*
.gitlab-ci.yml
@@ -13,6 +13,7 @@ doc:
   tags:
     - docker
   only:
+    - master
     - develop
   script:
     - export LC_ALL=$(locale -a | grep en_US)
@@ -27,21 +28,22 @@ doc:
     paths:
       - public

-# TODO: Replace the task doc by the following task pages when making the
-# project public
-#pages:
-#  image: registry.gitlab.lis-lab.fr:5005/baptiste.bauvin/multiview-machine-learning-omis/ubuntu:18.04
-#  tags:
-#    - docker
-#  only:
-#    - master
-#  script:
-#    - export LC_ALL=$(locale -a | grep en_US)
-#    - export LANG=$(locale -a | grep en_US)
-#    - python3 setup.py build_sphinx
-#    - cp -r build/sphinx/html public
-#  artifacts:
-#    paths:
-#      - public
+pages:
+  image: registry.gitlab.lis-lab.fr:5005/baptiste.bauvin/multiview-machine-learning-omis/ubuntu:18.04
+  tags:
+    - docker
+  only:
+    - master
+  script:
+    - export LC_ALL=$(locale -a | grep en_US)
+    - export LANG=$(locale -a | grep en_US)
+    - pip3 install -e . --no-deps
+    - sphinx-apidoc -o docs/source multiview_platform
+    - cd docs/source
+    - sphinx-build -b html . ../build
+    - cd ../..
+    - cp -r ./docs/build public
+  artifacts:
+    paths:
+      - public
.travis.yml
language: python
python:
  - 2.7
  - 3.5
addons:
  apt:
    packages:
      - libblas-dev
      - liblapack-dev
      - gfortran
install:
  - pip install -U pip pip-tools
  - pip install numpy scipy scikit-learn==0.19 matplotlib joblib argparse h5py
  - cd .. && git clone https://github.com/aldro61/pyscm.git && cd pyscm/ && python setup.py install && cd ../multiview-machine-learning-omis
  - pip install -e .
script:
  - python -m unittest discover
#notifications:
#  email:
#    on_success: change
#    on_failure: change
\ No newline at end of file
README.md
 [![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](http://www.gnu.org/licenses/gpl-3.0)
-[![Build Status](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/badges/develop/build.svg)](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/badges/develop/build.svg)
+[![Build Status](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/badges/develop/pipeline.svg)](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/badges/develop/pipeline.svg)

 # Mono- and Multi-view classification benchmark

 This project aims to be an easy-to-use solution to run a prior benchmark on a dataset and evaluate the capacity of mono- & multi-view algorithms to classify it correctly.
@@ -25,18 +25,18 @@ And the following python modules :
 * [m2r](https://pypi.org/project/m2r/) - Used to generate documentation from the readme,
 * [docutils](https://pypi.org/project/docutils/) - Used to generate documentation,
 * [pyyaml](https://pypi.org/project/PyYAML/) - Used to read the config files,
-* [plotly](https://plot.ly/) - Used to generate interactive HTML visuals.
-
-They are all tested in `multiview-machine-mearning-omis/multiview_platform/MonoMutliViewClassifiers/Versions.py` which is automatically checked each time you run the `execute` script
+* [plotly](https://plot.ly/) - Used to generate interactive HTML visuals,
+* [tabulate](https://pypi.org/project/tabulate/) - Used to generate the confusion matrix.

 ### Installing

-Once you have cloned the project from this repository, you just have to use :
+Once you have cloned the project from the [gitlab repository](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/), you just have to use :

 ```
 pip install -e .
 ```

-In the `multiview_machine-learning-omis` directory.
+In the `multiview_machine-learning-omis` directory to install SuMMIT and its dependencies.
 ### Running on simulated data

@@ -45,10 +45,10 @@ In order to run it you'll need to try on **simulated** data with the command
 from multiview_platform.execute import execute
 execute()
 ```
-This will run the first example. For more information about the examples, see the documentation
+This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/).
 Results will be stored in the results directory of the installation path :
 `path/to/install/multiview-machine-learning-omis/multiview_platform/examples/results`.
-The documentations proposes a detailed interpretation of the results.
+The documentation proposes a detailed interpretation of the results.

 ### Discovering the arguments
@@ -62,59 +62,52 @@ from multiview_platform.execute import execute
 execute(config_path="/absolute/path/to/your/config/file")
 ```
-For further information about classifier-specific arguments, see the documentation.
+For further information about classifier-specific arguments, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/).
 ### Dataset compatibility

-In order to start a benchmark on your dataset, you need to format it so the script can use it.
-You can have either a directory containing `.csv` files or a HDF5 file.
-
-##### If you have multiple `.csv` files, you must organize them as :
-* `top_directory/database_name-labels.csv`
-* `top_directory/database_name-labels-names.csv`
-* `top_directory/Views/view_name.csv` or `top_directory/Views/view_name-s.csv` if the view is sparse
-
-With `top_directory` being the last directory in the `pathF` argument
+In order to start a benchmark on your own dataset, you need to format it so SuMMIT can use it.
+
+[comment]: <> (You can have either a directory containing `.csv` files or a HDF5 file.)
+
+[comment]: <> (#### If you have multiple `.csv` files, you must organize them as :
+* `top_directory/database_name-labels.csv`
+* `top_directory/database_name-labels-names.csv`
+* `top_directory/Views/view_name.csv` or `top_directory/Views/view_name-s.csv` if the view is sparse)
+
+[comment]: <> (With `top_directory` being the last directory in the `pathF` argument)

 ##### If you already have an HDF5 dataset file it must be formatted as :
-One dataset for each view called `ViewX` with `X` being the view index with 2 attributes :
-  * `attrs["name"]` a string for the name of the view
-  * `attrs["sparse"]` a boolean specifying whether the view is sparse or not
-  * `attrs["ranges"]` a `np.array` containing the ranges of each attribute in the view (for ex. : for a pixel the range will be 255, for a real attribute in [-1,1], the range will be 2).
-  * `attrs["limits"]` a `np.array` containing all the limits of the attributes in the view (for ex. : for a pixel the limits will be `[0, 255]`, for a real attribute in [-1,1], the limits will be `[-1,1]`).
-One dataset for the labels called `Labels` with one attribute :
-  * `attrs["names"]` a list of strings encoded in utf-8 naming the labels in the right order
-One group for the additional data called `Metadata` containing at least 3 attributes :
-  * `attrs["nbView"]` an int counting the total number of views in the dataset
-  * `attrs["nbClass"]` an int counting the total number of different labels in the dataset
-  * `attrs["datasetLength"]` an int counting the total number of examples in the dataset
+* One dataset for each view called `ViewI` with `I` being the view index with 2 attributes :
+  * `attrs["name"]` a string for the name of the view
+  * `attrs["sparse"]` a boolean specifying whether the view is sparse or not (WIP)
+* One dataset for the labels called `Labels` with one attribute :
+  * `attrs["names"]` a list of strings encoded in utf-8 naming the labels in the right order
+* One group for the additional data called `Metadata` containing at least 1 dataset :
+  * `"example_ids"`, a numpy array of type `S100`, with the ids of the examples in the right order
+  * And three attributes :
+    * `attrs["nbView"]` an int counting the total number of views in the dataset
+    * `attrs["nbClass"]` an int counting the total number of different labels in the dataset
+    * `attrs["datasetLength"]` an int counting the total number of examples in the dataset
+
+The `format_dataset.py` file is documented and can be used to format a multiview dataset in a SuMMIT-compatible HDF5 file.
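
As an illustration of the layout above, here is a minimal sketch of how such a file could be written with `h5py`. This snippet is not part of the repository; the file name, the random views and the label names are placeholders:

```python
# Minimal sketch of building a SuMMIT-compatible HDF5 file with h5py.
# The file name, the two random views and the label names are placeholders.
import h5py
import numpy as np

n_examples = 200
views = [np.random.rand(n_examples, 10), np.random.rand(n_examples, 40)]
labels = np.random.randint(0, 2, n_examples)

with h5py.File("my_dataset.hdf5", "w") as f:
    # One dataset per view, named ViewI, with the two required attributes
    for i, view in enumerate(views):
        view_ds = f.create_dataset("View{}".format(i), data=view)
        view_ds.attrs["name"] = "view_{}".format(i)
        view_ds.attrs["sparse"] = False
    # The labels, with the utf-8 encoded label names in the right order
    labels_ds = f.create_dataset("Labels", data=labels)
    labels_ds.attrs["names"] = [name.encode("utf-8") for name in ("healthy", "sick")]
    # The Metadata group: the example_ids dataset plus the three attributes
    meta = f.create_group("Metadata")
    meta.create_dataset("example_ids", data=np.array(
        ["example_{}".format(i) for i in range(n_examples)], dtype="S100"))
    meta.attrs["nbView"] = len(views)
    meta.attrs["nbClass"] = len(np.unique(labels))
    meta.attrs["datasetLength"] = n_examples
```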
 ### Running on your dataset

-In order to run the script on your dataset you need to use :
-```
-cd multiview-machine-learning-omis/multiview_platform
-python execute.py -log --name <your_dataset_name> --type <.cvs_or_.hdf5> --pathF <path_to_your_dataset>
-```
+Once you have formatted your dataset, to run SuMMIT on it you need to modify the config file as
+```yaml
+name: ["your_file_name"]
+*
+pathf: "path/to/your/dataset"
+```
 This will run a full benchmark on your dataset using all available views and labels.
-You may configure the `--CL_statsiter`, `--CL_split`, `--CL_nbFolds`, `--CL_GS_iter` arguments to start a meaningful benchmark
+It is highly recommended to follow the documentation's [tutorials](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/tutorials/index.html) to learn the use of each parameter.
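
To make the shape of a complete user config concrete, below is a hedged sketch assembled only from options that appear in the example configs further down in this commit; the dataset name and path are placeholders:

```yaml
# Sketch of a minimal user config (keys taken from the example configs in
# this commit; the dataset name and path are placeholders)
name: ["your_file_name"]
file_type: ".hdf5"
pathf: "path/to/your/dataset/"
views:                        # an empty value uses all available views
type: ["monoview", "multiview"]
algos_monoview: ["decision_tree", "adaboost"]
algos_multiview: ["weighted_linear_late_fusion"]
stats_iter: 3
metrics:
  accuracy_score: {}
  f1_score: {}
metric_princ: "accuracy_score"
hps_type: "Random"
hps_args:
  n_iter: 10
  equivalent_draws: False
```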
-## Running the tests
-
-**/!\ still in development, test success is not meaningful ATM /!\\**
-
-In order to run it you'll need to try on simulated data with the command
-```
-cd multiview-machine-learning-omis/
-python -m unittest discover
-```
 ## Author

@@ -122,6 +115,5 @@ python -m unittest discover

 ### Contributors

-* **Mazid Osseni**
-* **Alexandre Drouin**
-* **Nikolas Huelsmann**
+* **Dominique BENIELLI**
+* **Alexis PROD'HOMME**
\ No newline at end of file
 # The base configuration of the benchmark
-Base :
 # Enable logging
 log: True
 # The name of each dataset in the directory on which the benchmark should be run
@@ -7,7 +7,7 @@ Base :
 # A label for the result directory
 label: "_"
 # The type of dataset, currently supported ".hdf5", and ".csv"
-type: ".hdf5"
+file_type: ".hdf5"
 # The views to use in the benchmark, an empty value will result in using all the views
 views:
 # The path to the directory where the datasets are stored
@@ -27,9 +27,11 @@ Base :
 noise_std: 0.0
 # The directory in which the results will be stored
 res_dir: "../results/"
+# If an error occurs in a classifier, if track_tracebacks is set to True, the
+# benchmark saves the traceback and continues, if it is set to False, it will
+# stop the benchmark and raise the error
+track_tracebacks: True
 # All the classification-related configuration options
-Classification:
 # If the dataset is multiclass, will use this multiclass-to-biclass method
 multiclass_method: "oneVersusOne"
 # The ratio number of test examples/number of train examples
@@ -54,9 +56,13 @@ Classification:
 # The metric that will be used in the hyper-parameter optimization process
 metric_princ: "f1_score"
 # The type of hyper-parameter optimization method
-hps_type: "randomized_search"
-# The number of iterations in the hyper-parameter optimization process
-hps_iter: 2
+hps_type: "Random"
+# The arguments of the hyper-parameter optimization method
+hps_args:
+  # The number of iterations of the optimization process
+  n_iter: 4
+  # If True, for multiview algorithms, will use n_iter*n_views iterations to optimize
+  equivalent_draws: True
 # The following arguments are classifier-specific, and are documented in each
 ...
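
To make the new hyper-parameter search format concrete, the old and new shapes compare as follows, with values taken from this diff:

```yaml
# Before this commit: a single iteration count
hps_type: "randomized_search"
hps_iter: 2

# After this commit: a method name plus a free-form argument block
hps_type: "Random"
hps_args:
  n_iter: 4               # number of random draws
  equivalent_draws: True  # multiview algorithms get n_iter * n_views draws
```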
# The base configuration of the benchmark
log: True
name: ["digits"]
label: "_"
file_type: ".hdf5"
views:
pathf: "/home/baptiste/Documents/Datasets/Digits/"
nice: 0
random_state: 42
nb_cores: 1
full: False
debug: True
add_noise: False
noise_std: 0.0
res_dir: "../results/"
track_tracebacks: False

# All the classification-related configuration options
multiclass_method: "oneVersusOne"
split: 0.75
nb_folds: 5
nb_class: 2
classes:
type: ["multiview",]
algos_monoview: ["group_scm",]
algos_multiview: ["group_scm"]
stats_iter: 2
metrics:
  accuracy_score: {}
  f1_score:
    average: 'micro'
metric_princ: "accuracy_score"
hps_type: "None"
hps_args: {}
\ No newline at end of file
 # The base configuration of the benchmark
-Base :
 log: True
-name: ["metrics"]
+name: ["digits"]
 label: "_"
-type: ".hdf5"
+file_type: ".hdf5"
 views:
-pathf: "/home/baptiste/Documents/Datasets/Generated/metrics_dset/"
+pathf: "/home/baptiste/Documents/Datasets/Digits/"
 nice: 0
 random_state: 42
 nb_cores: 1
@@ -14,19 +13,19 @@ Base :
 add_noise: False
 noise_std: 0.0
 res_dir: "../results/"
+track_tracebacks: False
 # All the classification-related configuration options
-Classification:
 multiclass_method: "oneVersusOne"
-split: 0.5
+split: 0.75
 nb_folds: 5
-nb_class: 2
+nb_class:
 classes:
-type: ["multiview"]
+type: ["multiview",]
-algos_monoview: ["random_forest"]
+algos_monoview: ["decision_tree", "random_forest"]
-algos_multiview: ["mucombo"]
+algos_multiview: ["mumbo","mvml"]
-stats_iter: 1
+stats_iter: 2
 metrics: ["accuracy_score", "f1_score"]
-metric_princ: "f1_score"
+metric_princ: "accuracy_score"
-hps_type: "randomized_search"
+hps_type: "randomized_search-equiv"
-hps_iter: 1
+hps_iter: 2
\ No newline at end of file
 # The base configuration of the benchmark
-Base :
 log: True
-name: ["plausible", "koukou"]
+name: ["digits",]
 label: "_"
-type: ".hdf5"
+file_type: ".hdf5"
 views:
-pathf: "../data/"
+pathf: "/home/baptiste/Documents/Datasets/Digits/"
 nice: 0
 random_state: 42
 nb_cores: 1
@@ -14,212 +13,254 @@ Base :
 add_noise: False
 noise_std: 0.0
 res_dir: "../results/"
+track_tracebacks: False
 # All the classification-related configuration options
-Classification:
 multiclass_method: "oneVersusOne"
-split: 0.9
+split: 0.8
 nb_folds: 2
 nb_class: 2
 classes:
-type: ["multiview", "monoview"]
+type: ["monoview", "multiview"]
-algos_monoview: ["decision_tree", "adaboost", "random_forest" ]
+algos_monoview: ["decision_tree", "adaboost", ]
-algos_multiview: ["weighted_linear_early_fusion",]
+algos_multiview: ["weighted_linear_late_fusion"]
-stats_iter: 2
+stats_iter: 3
-metrics: ["accuracy_score", "f1_score"]
-metric_princ: "f1_score"
-hps_type: "randomized_search-equiv"
-hps_iter: 5
+metrics:
+  accuracy_score: {}
+  f1_score: {}
+metric_princ: "accuracy_score"
+hps_type: "Random"
+hps_args:
+  n_iter: 10
+  equivalent_draws: False
-#####################################
-# The Monoview Classifier arguments #
-#####################################
-random_forest:
-  n_estimators: [25]
-  max_depth: [3]
-  criterion: ["entropy"]
-svm_linear:
-  C: [1]
-svm_rbf:
-  C: [1]
-svm_poly:
-  C: [1]
-  degree: [2]
-adaboost:
-  n_estimators: [50]
-  base_estimator: ["DecisionTreeClassifier"]
-adaboost_pregen:
-  n_estimators: [50]
-  base_estimator: ["DecisionTreeClassifier"]
-  n_stumps: [1]
-adaboost_graalpy:
-  n_iterations: [50]
-  n_stumps: [1]
-decision_tree:
-  max_depth: [2]
-  criterion: ["gini"]
-  splitter: ["best"]
-decision_tree_pregen:
-  max_depth: [10]
-  criterion: ["gini"]
-  splitter: ["best"]
-  n_stumps: [1]
-sgd:
-  loss: ["hinge"]
-  penalty: [l2]
-  alpha: [0.0001]
-knn:
-  n_neighbors: [5]
-  weights: ["uniform"]
-  algorithm: ["auto"]
-scm:
-  model_type: ["conjunction"]
-  max_rules: [10]
-  p: [0.1]
-scm_pregen:
-  model_type: ["conjunction"]
-  max_rules: [10]
-  p: [0.1]
-  n_stumps: [1]
-cq_boost:
-  mu: [0.01]
-  epsilon: [1e-06]
-  n_max_iterations: [5]
-  n_stumps: [1]
-cg_desc:
-  n_max_iterations: [10]
-  n_stumps: [1]
-cb_boost:
-  n_max_iterations: [10]
-  n_stumps: [1]
-lasso:
-  alpha: [1]
-  max_iter: [2]
-gradient_boosting:
-  n_estimators: [2]
-######################################
-# The Multiview Classifier arguments #
-######################################
-weighted_linear_early_fusion:
-  view_weights: [null]
-  monoview_classifier_name: ["decision_tree"]
-  monoview_classifier_config:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-entropy_fusion:
-  classifiers_names: [["decision_tree"]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-disagree_fusion:
-  classifiers_names: [["decision_tree"]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-double_fault_fusion:
-  classifiers_names: [["decision_tree"]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-difficulty_fusion:
-  classifiers_names: [["decision_tree"]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-scm_late_fusion:
-  classifiers_names: [["decision_tree"]]
-  p: 0.1
-  max_rules: 10
-  model_type: 'conjunction'
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-majority_voting_fusion:
-  classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-bayesian_inference_fusion:
-  classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-weighted_linear_late_fusion:
-  classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
-  classifier_configs:
-    decision_tree:
-      max_depth: [1]
-      criterion: ["gini"]
-      splitter: ["best"]
-mumbo:
-  base_estimator: [null]
-  n_estimators: [10]
-  best_view_mode: ["edge"]
-lp_norm_mkl:
-  lmbda: [0.1]
-  n_loops: [50]
-  precision: [0.0001]
-  kernel: ["rbf"]
-  kernel_params:
-    gamma: [0.1]
-mvml:
-  reg_params: [[0,1]]
-  nystrom_param: [1]
-  learn_A: [1]
-  learn_w: [0]
-  n_loops: [6]
-  kernel_types: ["rbf_kernel"]
-  kernel_configs:
-    gamma: [0.1]
+weighted_linear_early_fusion:
+  view_weights: null
+  monoview_classifier_name: "decision_tree"
+  monoview_classifier_config:
+    decision_tree:
+      max_depth: 12
+      criterion: "gini"
+      splitter: "best"
+weighted_linear_late_fusion:
+  weights: null
+  classifiers_names: "decision_tree"
+  classifier_configs:
+    decision_tree:
+      max_depth: 3
+      criterion: "gini"
+      splitter: "best"
+    decision_tree:
+      max_depth: 3
+    adaboost:
+      base_estimator: "DecisionTreeClassifier"
+      n_estimators: 50
+mumbo:
+  base_estimator__criterion: 'gini'
+  base_estimator__max_depth: 3
+  base_estimator__random_state: None
+  base_estimator__splitter: 'best'
+  best_view_mode: 'edge'
+  base_estimator: 'decision_tree'
+  n_estimators: 10
+mucombo:
+  base_estimator__criterion: 'gini'
+  base_estimator__max_depth: 3
+  base_estimator__random_state: None
+  base_estimator__splitter: 'best'
+  best_view_mode: 'edge'
+  base_estimator: 'decision_tree'
+  n_estimators: 10
######################################
## The Monoview Classifier arguments #
######################################
#
#random_forest:
# n_estimators: [25]
# max_depth: [3]
# criterion: ["entropy"]
#
#svm_linear:
# C: [1]
#
#svm_rbf:
# C: [1]
#
#svm_poly:
# C: [1]
# degree: [2]
#
#adaboost:
# n_estimators: [50]
# base_estimator: ["DecisionTreeClassifier"]
#
#adaboost_pregen:
# n_estimators: [50]
# base_estimator: ["DecisionTreeClassifier"]
# n_stumps: [1]
#
#adaboost_graalpy:
# n_iterations: [50]
# n_stumps: [1]
#
#
#decision_tree_pregen:
# max_depth: [10]
# criterion: ["gini"]
# splitter: ["best"]
# n_stumps: [1]
#
#sgd:
# loss: ["hinge"]
# penalty: [l2]
# alpha: [0.0001]
#
#knn:
# n_neighbors: [5]
# weights: ["uniform"]
# algorithm: ["auto"]
#
#scm:
# model_type: ["conjunction"]
# max_rules: [10]
# p: [0.1]
#
#scm_pregen:
# model_type: ["conjunction"]
# max_rules: [10]
# p: [0.1]
# n_stumps: [1]
#
#cq_boost:
# mu: [0.01]
# epsilon: [1e-06]
# n_max_iterations: [5]
# n_stumps: [1]
#
#cg_desc:
# n_max_iterations: [10]
# n_stumps: [1]
#
#cb_boost:
# n_max_iterations: [10]
# n_stumps: [1]
#
#lasso:
# alpha: [1]
# max_iter: [2]
#
#gradient_boosting:
# n_estimators: [2]
#
#
#######################################
## The Multiview Classifier arguments #
#######################################
#
#weighted_linear_early_fusion:
# view_weights: [null]
# monoview_classifier_name: ["decision_tree"]
# monoview_classifier_config:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#entropy_fusion:
# classifiers_names: [["decision_tree"]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#disagree_fusion:
# classifiers_names: [["decision_tree"]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#
#double_fault_fusion:
# classifiers_names: [["decision_tree"]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#difficulty_fusion:
# classifiers_names: [["decision_tree"]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#scm_late_fusion:
# classifiers_names: [["decision_tree"]]
# p: 0.1
# max_rules: 10
# model_type: 'conjunction'
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#majority_voting_fusion:
# classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#bayesian_inference_fusion:
# classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#weighted_linear_late_fusion:
# classifiers_names: [["decision_tree", "decision_tree", "decision_tree", ]]
# classifier_configs:
# decision_tree:
# max_depth: [1]
# criterion: ["gini"]
# splitter: ["best"]
#
#mumbo:
# base_estimator: [null]
# n_estimators: [10]
# best_view_mode: ["edge"]
#
#lp_norm_mkl:
# lmbda: [0.1]
# n_loops: [50]
# precision: [0.0001]
# kernel: ["rbf"]
# kernel_params:
# gamma: [0.1]
#
#mvml:
# reg_params: [[0,1]]
# nystrom_param: [1]
# learn_A: [1]
# learn_w: [0]
# n_loops: [6]
# kernel_types: ["rbf_kernel"]
# kernel_configs:
# gamma: [0.1]
12 binary files changed (no preview available)