This project aims to be an easy-to-use solution for running a preliminary benchmark on a dataset and evaluating the capacity of mono- and multi-view algorithms to classify it correctly.
## Getting Started
### Prerequisites (will be automatically installed)
For further information about classifier-specific arguments, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/summit/).
### Dataset compatibility
In order to start a benchmark on your own dataset, you need to format it so that SuMMIT can use it. To do so, a [Python script](https://gitlab.lis-lab.fr/baptiste.bauvin/summit/-/blob/master/format_dataset.py) is provided.
For more information, see [Example 6](http://baptiste.bauvin.pages.lis-lab.fr/summit/tutorials/example4.html).
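If you prefer to build the file yourself, the sketch below shows one plausible SuMMIT-compatible HDF5 layout using `h5py`. It is a minimal sketch only: the `View0`/`Labels`/`Metadata` names and their attributes are assumptions inferred from the formatting tutorial, so treat `format_dataset.py` as the authoritative reference.

```python
# Hypothetical sketch of a SuMMIT-compatible HDF5 file. The dataset and
# attribute names below are ASSUMPTIONS based on the formatting tutorial;
# check format_dataset.py for the authoritative structure.
import h5py
import numpy as np

n_samples = 100
views = {"view_a": np.random.rand(n_samples, 20),
         "view_b": np.random.rand(n_samples, 5)}
labels = np.random.randint(0, 2, n_samples)

with h5py.File("your_file_name.hdf5", "w") as f:
    for i, (view_name, view_data) in enumerate(views.items()):
        # One dataset per view, named View0 .. ViewN-1 (assumed convention)
        ds = f.create_dataset("View{}".format(i), data=view_data)
        ds.attrs["name"] = view_name
        ds.attrs["sparse"] = False
    labels_ds = f.create_dataset("Labels", data=labels)
    labels_ds.attrs["names"] = [b"class_0", b"class_1"]
    meta = f.create_group("Metadata")
    meta.attrs["nbView"] = len(views)
    meta.attrs["nbClass"] = len(np.unique(labels))
    meta.attrs["datasetLength"] = n_samples
```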
### Running on your dataset
Once you have formatted your dataset, to run SuMMIT on it you need to modify the config file as follows:
```yaml
name: ["your_file_name"]
# ...
pathf: "path/to/your/dataset"
```
This will run a full benchmark on your dataset using all available views and labels.
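To restrict the benchmark to a subset of the data, the config file also exposes selection keys. The `views` and `classes` keys below are assumptions based on the sample config file shipped with SuMMIT, so verify them against your version:

```yaml
# Assumed selection keys (check the sample config shipped with SuMMIT):
views: ["view_a", "view_b"]      # only benchmark these views
classes: ["class_0", "class_1"]  # only use samples with these labels
```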
It is highly recommended to follow the documentation's [tutorials](http://baptiste.bauvin.pages.lis-lab.fr/summit/tutorials/index.html) to learn how to use each parameter.
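Once the config file is ready, a benchmark run might look like the following. The `execute` entry point appears in SuMMIT's own examples, but the `config_path` keyword is an assumption, so check the tutorials for the exact call supported by your version.

```python
# Minimal run sketch: execute comes from SuMMIT's examples, but the
# config_path keyword is an ASSUMPTION; see the tutorials for the exact call.
from summit.execute import execute

execute(config_path="path/to/your/config_file.yml")
```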