diff --git a/README.md b/README.md
index 24ed144217bdf8631f361d8eacb0b2d73c320113..6a91323d722a6d3bc9526a5685dbc196c21391b7 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 [](http://www.gnu.org/licenses/gpl-3.0)
-[](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/badges/develop/pipeline.svg)
+[](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/badges/develop/pipeline.svg)
 # Supervised MultiModal Integration Tool
 This project aims to be an easy-to-use solution to run a prior benchmark on a dataset and evaluate mono- & multi-view algorithms capacity to classify it correctly.
 
@@ -31,13 +31,13 @@ And the following python modules :
 
 ### Installing
 
-Once you cloned the project from the [gitlab repository](https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/), you just have to use :
+Once you cloned the project from the [gitlab repository](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/), you just have to use :
 
 ```
-cd path/to/multiview-machine-learning-omis/
+cd path/to/summit/
 pip install -e .
 ```
-In the `multiview-machine-learning-omis` directory to install SuMMIT and its dependencies.
+In the `summit` directory to install SuMMIT and its dependencies.
 
 ### Running on simulated data
 
@@ -46,15 +46,15 @@ In order to run it you'll need to try on **simulated** data with the command
 from multiview_platform.execute import execute
 execute()
 ```
-This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/).
+This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/summit/).
 Results will be stored in the results directory of the installation path :
-`path/to/install/multiview-machine-learning-omis/multiview_platform/examples/results`.
+`path/to/install/summit/multiview_platform/examples/results`.
 The documentation proposes a detailed interpretation of the results.
 
 ### Discovering the arguments
 
 All the arguments of the platform are stored in a YAML config file. Some config files are given as examples.
-The file stored in `multiview-machine-learning-omis/config_files/config.yml` is documented and it is highly recommended
+The file stored in `summit/config_files/config.yml` is documented and it is highly recommended
 to read it carefully before playing around with the parameters.
 You can create your own configuration file. In order to run the platform with it, run :
 
@@ -63,7 +63,7 @@ from multiview_platform.execute import execute
 execute(config_path="/absolute/path/to/your/config/file")
 ```
 
-For further information about classifier-specific arguments, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/).
+For further information about classifier-specific arguments, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/summit/).
 
 ### Dataset compatibility
 
@@ -107,7 +107,7 @@ pathf: "path/to/your/dataset"
 ```
 
 This will run a full benchmark on your dataset using all available views and labels.
-It is highly recommended to follow the documentation's [tutorials](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/tutorials/index.html) to learn the use of each parameter.
+It is highly recommended to follow the documentation's [tutorials](http://baptiste.bauvin.pages.lis-lab.fr/summit/tutorials/index.html) to learn the use of each parameter.
 
 ## Author
 
diff --git a/docs/source/conf.py b/docs/source/conf.py
index e3db2f62c636a6856c1ed00b8b2adeb4327167fa..4dbeae898c9bf7016625eba8a854acba6bd40c54 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -195,5 +195,5 @@ rst_prolog = """
 
 """
 
-extlinks = {'base_source': ('https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/-/tree/master/', "base_source"),
-            'base_doc': ('http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/', 'base_doc')}
+extlinks = {'base_source': ('https://gitlab.lis-lab.fr/baptiste.bauvin/summit/-/tree/master/', "base_source"),
+            'base_doc': ('http://baptiste.bauvin.pages.lis-lab.fr/summit/', 'base_doc')}
diff --git a/docs/source/tutorials/example1.rst b/docs/source/tutorials/example1.rst
index 7ee4cdf914ac9a22abb615171bb1e86976772c46..1b2ee02ec0fd513edc15040649cfecc184bf0faa 100644
--- a/docs/source/tutorials/example1.rst
+++ b/docs/source/tutorials/example1.rst
@@ -70,7 +70,7 @@ The config file that will be used in this example is available :base_source:`her
 
  - :yaml:`name: ["summit_doc"]` (:base_source:`l6 <multiview_platform/examples/config_files/config_example_1.yml#L6>`) uses the plausible simulated dataset,
  - :yaml:`random_state: 42` (:base_source:`l18 <multiview_platform/examples/config_files/config_example_1.yml#L18>`) fixes the seed of the random state for this benchmark, it is useful for reproductibility,
  - :yaml:`full: True` (:base_source:`l22 <multiview_platform/examples/config_files/config_example_1.yml#L22>`) means the benchmark will use the full dataset,
- - :yaml:`res_dir: "examples/results/example_1/"` (:base_source:`l26 <multiview_platform/examples/config_files/config_example_1.yml#L26>`) saves the results in ``multiview-machine-learning-omis/multiview_platform/examples/results/example_1``
+ - :yaml:`res_dir: "examples/results/example_1/"` (:base_source:`l26 <multiview_platform/examples/config_files/config_example_1.yml#L26>`) saves the results in ``summit/multiview_platform/examples/results/example_1``
+
 Then the classification-related arguments :
diff --git a/docs/source/tutorials/hps_theory.rst b/docs/source/tutorials/hps_theory.rst
index f365571398b4f71bf029a1f506a7d09513f7dfec..5da342f464924590ae72a3cb924f027268f0b71d 100644
--- a/docs/source/tutorials/hps_theory.rst
+++ b/docs/source/tutorials/hps_theory.rst
@@ -43,7 +43,7 @@ Understanding hyper-parameter optimization
 
 As hyper-parameters are task dependant, there are three ways in the platform to set their value :
 
-- If you know the value (or a set of values), specify them at the end of the config file for each algorithm you want to test, and use :yaml:`hps_type: 'None'` in the `config file <https://gitlab.lis-lab.fr/baptiste.bauvin/multiview-machine-learning-omis/-/blob/master/multiview_platform/examples/config_files/config_example_2_1_1.yml#L61>`_. This will bypass the optimization process to run the algorithm on the specified values.
+- If you know the value (or a set of values), specify them at the end of the config file for each algorithm you want to test, and use :yaml:`hps_type: 'None'` in the :base_source:`config file <multiview_platform/examples/config_files/config_example_2_1_1.yml#L61>`. This will bypass the optimization process to run the algorithm on the specified values.
 - If you have several possible values in mind, specify them in the config file and use ``hps_type: 'Grid'`` to run a grid search on the possible values.
 - If you have no ideas on the values, the platform proposes a random search for hyper-parameter optimization.
 
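To make the hyper-parameter options described in `hps_theory.rst` concrete, the sketch below shows a run that bypasses the optimization step. Only the import and the `execute(config_path=...)` call are taken from the README; the config path is a placeholder and the exact layout of the YAML file around `hps_type` is an assumption, not a copy of the repository's example configs.

```python
# Minimal sketch, assuming a config file whose hyper-parameter section sets
#   hps_type: 'None'   -> use the per-classifier values given at the end of the file
# or
#   hps_type: 'Grid'   -> grid-search over the listed values
# With no values specified, the platform falls back to its random search.
from multiview_platform.execute import execute

execute(config_path="/absolute/path/to/your/config/file")  # placeholder path
```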
diff --git a/docs/source/tutorials/installation.rst b/docs/source/tutorials/installation.rst
index de5ab85a47e371689c242a8e2af8892dd8514802..59496377bb2d6b1b2d7ef4a968eb1b3ddcf29e92 100644
--- a/docs/source/tutorials/installation.rst
+++ b/docs/source/tutorials/installation.rst
@@ -13,7 +13,7 @@ To sum up what you need to run the platform :
 
 Launching the setup tool
 ------------------------
 
-To install |platf|, it is recommended to use a virtual environment. Then, run in a terminal the following command, in the ``multiview-machine-learning-omis`` directory
+To install |platf|, it is recommended to use a virtual environment. Then, run in a terminal the following command, in the ``summit`` directory
 
 .. code-block:: shell
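For a quick end-to-end check once SuMMIT has been installed from the ``summit`` directory, the sketch below chains the steps quoted in the README: the shell steps appear as comments, and only the two Python lines are verbatim from the README; ``path/to/summit/`` stands in for wherever the repository was cloned.

```python
# Sketch of the first run described in the README, assuming SuMMIT was
# installed beforehand with:
#   cd path/to/summit/
#   pip install -e .
from multiview_platform.execute import execute

# Runs the first example on simulated data; results are then written under
# path/to/install/summit/multiview_platform/examples/results
execute()
```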