First steps with Multiview platform
Context
This platform aims to run multiple state-of-the-art classifiers on a multiview dataset in a classification context. It was developed to provide a baseline of common algorithms for any classification task.
Adding a new classifier (monoview and/or multiview) has been made as simple as possible, so that users can customize the set of classifiers and test their performance in a controlled environment.
Introduction to this tutorial
This tutorial will show you how to use the platform on simulated data, for the simplest problem: biclass (binary) classification.
The data used in this example is naively generated.
Getting started
Importing the platform's execution function
>>> from multiview_platform.execute import execute
Understanding the config file
The config file used in this example is located at multiview-machine-learning-omis/multiview_platform/examples/config_files/config_example_1.yml
Let us go through the main arguments:
The first arguments are the basic ones (a combined sketch follows this list):
- log: True
allows the log to be printed in the terminal,
- name: ["plausible"]
uses the "plausible" simulated dataset,
- random_state: 42
fixes the random state of this benchmark, which is useful for reproducibility,
- full: True
the benchmark will use the full dataset,
- res_dir: "examples/results/example_1/"
the results will be saved in multiview-machine-learning-omis/multiview_platform/examples/results/example_1
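Taken together, these basic arguments might look like the following in the YAML file (this is only a sketch; the exact layout of config_example_1.yml may differ):

log: True
name: ["plausible"]
random_state: 42
full: True
res_dir: "examples/results/example_1/"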
Then come the classification-related arguments (again, a sketch follows the list):
- split: 0.8
means that 80% of the dataset will be used to test the different classifiers and 20% to train them
- type: ["monoview", "multiview"]
allows for monoview and multiview algorithms to be used in the benchmark
- algos_monoview: ["all"]
runs all the available monoview algorithms (the same applies to algos_multiview),
- metrics: ["accuracy_score", "f1_score"]
means that the benchmark will evaluate the performance of each algorithm on these two metrics.
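Again as a sketch (assuming the same key names as above), the classification-related part of the config file might look like:

split: 0.8
type: ["monoview", "multiview"]
algos_monoview: ["all"]
algos_multiview: ["all"]
metrics: ["accuracy_score", "f1_score"]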
Finally, the two following categories are algorithm-related and contain the default values for the classifiers' hyper-parameters.
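As a purely hypothetical illustration (the classifier name and hyper-parameter keys below are placeholders, not necessarily the ones used by the platform), such a category could look like:

decision_tree:
  max_depth: 3
  criterion: "gini"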
Start the benchmark
During the whole benchmark, the log will be printed in the terminal. To start the benchmark, run:
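A minimal call, assuming the execute function imported above picks up the example configuration without requiring any argument (a hedged sketch, not confirmed by this section):

>>> execute()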