Commit 033b3662 authored by Baptiste Bauvin

Summit rename

parent d4f35e85
[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](http://www.gnu.org/licenses/gpl-3.0)
[![Build Status](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/badges/develop/pipeline.svg)](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/badges/develop/pipeline.svg)
# Supervised MultiModal Integration Tool
This project aims to be an easy-to-use solution for running a preliminary benchmark on a dataset and evaluating the capacity of mono- and multi-view algorithms to classify it correctly.
...@@ -31,13 +31,13 @@ And the following python modules :
### Installing
Once you have cloned the project from the [gitlab repository](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/), run:
```
cd path/to/summit/
pip install -e .
```
This installs SuMMIT and its dependencies from within the `summit` directory.
### Running on simulated data
...@@ -46,15 +46,15 @@
To run it, first try it on **simulated** data with the command:
```
from multiview_platform.execute import execute
execute()
```
This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/summit/).
Results will be stored in the results directory of the installation path:
`path/to/install/summit/multiview_platform/examples/results`.
The documentation provides a detailed interpretation of the results.
### Discovering the arguments
All the arguments of the platform are stored in a YAML config file. Some config files are given as examples.
The file stored in `summit/config_files/config.yml` is documented, and it is highly recommended
to read it carefully before playing around with the parameters.
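For orientation, here is a minimal sketch of what such a configuration can look like. Every key and value below is taken from the examples mentioned in this README and its documentation, but combining them into this stripped-down file is only an illustration, not a shipped default:

```yaml
# Illustrative minimal config -- keys and values borrowed from the documented
# examples; adapt them to your own benchmark.
name: ["summit_doc"]                    # dataset name(s) to benchmark
pathf: "path/to/your/dataset"           # directory containing the dataset
random_state: 42                        # fixed seed, for reproducibility
full: True                              # run on the full dataset
res_dir: "examples/results/example_1/"  # where the result files are written
hps_type: "None"                        # bypass hyper-parameter optimization
```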
You can create your own configuration file. To run the platform with it, run:
...@@ -63,7 +63,7 @@
```
from multiview_platform.execute import execute
execute(config_path="/absolute/path/to/your/config/file")
```
For further information about classifier-specific arguments, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/summit/).
### Dataset compatibility
...@@ -107,7 +107,7 @@
```
pathf: "path/to/your/dataset"
```
This will run a full benchmark on your dataset using all available views and labels.
It is highly recommended to follow the documentation's [tutorials](http://baptiste.bauvin.pages.lis-lab.fr/summit/tutorials/index.html) to learn how to use each parameter.
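As a further sketch (the dataset name below is a placeholder, and everything not shown is assumed to keep the values from the example config files), running on your own data essentially means overriding the dataset-related keys:

```yaml
# Hypothetical override for a custom dataset -- both values are placeholders.
name: ["my_dataset"]            # base name of your dataset
pathf: "path/to/your/dataset"   # directory that contains it
```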
## Author
...
...@@ -195,5 +195,5 @@ rst_prolog = """
"""
extlinks = {'base_source': ('https://gitlab.lis-lab.fr/baptiste.bauvin/summit/-/tree/master/', "base_source"),
            'base_doc': ('http://baptiste.bauvin.pages.lis-lab.fr/summit/', 'base_doc')}
...@@ -70,7 +70,7 @@ The config file that will be used in this example is available :base_source:`her
- :yaml:`name: ["summit_doc"]` (:base_source:`l6 <multiview_platform/examples/config_files/config_example_1.yml#L6>`) uses the plausible simulated dataset,
- :yaml:`random_state: 42` (:base_source:`l18 <multiview_platform/examples/config_files/config_example_1.yml#L18>`) fixes the seed of the random state for this benchmark, which is useful for reproducibility,
- :yaml:`full: True` (:base_source:`l22 <multiview_platform/examples/config_files/config_example_1.yml#L22>`) means the benchmark will use the full dataset,
- :yaml:`res_dir: "examples/results/example_1/"` (:base_source:`l26 <multiview_platform/examples/config_files/config_example_1.yml#L26>`) saves the results in ``summit/multiview_platform/examples/results/example_1``
+ Then the classification-related arguments:
...
...@@ -43,7 +43,7 @@ Understanding hyper-parameter optimization
As hyper-parameters are task-dependent, there are three ways in the platform to set their values:
- If you know the value (or a set of values), specify them at the end of the config file for each algorithm you want to test, and use :yaml:`hps_type: 'None'` in the :base_source:`config file <multiview_platform/examples/config_files/config_example_2_1_1.yml#L61>`. This bypasses the optimization process and runs the algorithm on the specified values (see the sketch after this list).
- If you have several possible values in mind, specify them in the config file and use ``hps_type: 'Grid'`` to run a grid search on the possible values.
- If you have no idea of the values, the platform offers a random search for hyper-parameter optimization.
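Purely as an illustration of these options: the :yaml:`hps_type` values ``'None'`` and ``'Grid'`` are the ones mentioned above, while the classifier section at the end (``decision_tree`` and ``max_depth``) is a hypothetical placeholder, since the exact layout is defined by the example config files.

.. code-block:: yaml

    # 'None' and 'Grid' come from the documentation; the classifier block
    # below is a hypothetical placeholder for algorithm-specific values.
    hps_type: 'None'         # run directly on the values listed below
    # hps_type: 'Grid'       # or: grid-search over the listed candidates
    decision_tree:
        max_depth: [3, 5]    # candidate values for the hypothetical parameter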
...
...@@ -13,7 +13,7 @@ To sum up what you need to run the platform :
Launching the setup tool
------------------------
To install |platf|, it is recommended to use a virtual environment. Then, run the following command in a terminal, from the ``summit`` directory:
.. code-block:: shell
...