diff --git a/README.md b/README.md
index 4abe29dfb096f30e69af5937f172d43c3a041062..676a34ca83fd737a6e5122c7ae6f1c8085ab767e 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ This project aims to be an easy-to-use solution to run a prior benchmark on a da
 
 ## Getting Started
 
-### Prerequisites
+### Prerequisites (will be automatically installed)
 
 To be able to use this project, you'll need :
 
@@ -45,7 +45,7 @@ In order to run it you'll need to try on **simulated** data with the command
 from multiview_platform.execute import execute
 execute()
 ```
-This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/) 
+This will run the first example. For more information about the examples, see the [documentation](http://baptiste.bauvin.pages.lis-lab.fr/multiview-machine-learning-omis/).
 Results will be stored in the results directory of the installation path : 
 `path/to/install/multiview-machine-learning-omis/multiview_platform/examples/results`.
 The documentation proposes a detailed interpretation of the results. 
diff --git a/docs/source/index.rst b/docs/source/index.rst
index a0e7fc57c6adca789436547cd37a7e581f29db95..a205aa032714f4b610b58db90a7058b1bba711b0 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,5 +1,5 @@
-Welcome to Supervised MultiModal Integration Tool's documentation !
-===================================================================
+Welcome to Supervised MultiModal Integration Tool's documentation
+=================================================================
 
 This package has been designed as an easy-to-use platform to estimate different mono- and multi-view classifiers' performances on a multiview dataset.
 
diff --git a/docs/source/tutorials/example0.rst b/docs/source/tutorials/example0.rst
index 9a1baeeae5acec6f71e1595d0ff8fa8bfa81688b..d7e178f93967e291379f764e1bec6c94ffd66196 100644
--- a/docs/source/tutorials/example0.rst
+++ b/docs/source/tutorials/example0.rst
@@ -66,7 +66,7 @@ This is possible with another result analysis, available in :base_source:`png <m
     .. :file: images/example_0/err.html
     :file: images/fake.html
 
-This figure represents a matrix, with the examples in rows and classifiers in columns, with a white rectangle on row i, column j if classifier j failed to classify example i.
+This figure represents a matrix, with the examples in rows and the classifiers in columns, showing a white rectangle on row i, column j if classifier j successfully classified example i.
 
 A quick analysis of it shows that a decision tree (DT) on the view ``digit_col_grad_0`` is unable to classify any example of labels 1, 2, 3 or 4, and that both the other DTs have a similar behavior with other labels.
 Concerning the fusions, if you zoom in on the examples labelled "2", you may see that some errors made by the early fusion classifier are on examples that were mis-classified by the three DTs :
diff --git a/docs/source/tutorials/example1.rst b/docs/source/tutorials/example1.rst
index 2c8e481d3cd1b50abc0073b1ab34406a030c8f7e..18081bbe2ed963c42030f0f8be361a35dc83a341 100644
--- a/docs/source/tutorials/example1.rst
+++ b/docs/source/tutorials/example1.rst
@@ -46,6 +46,8 @@ It has been parametrized with the following error matrix :
 +---------+--------+--------+--------+--------+
 | label_7 |  0.40  |  0.40  |  0.40  |  0.40  |
 +---------+--------+--------+--------+--------+
+| label_8 |  0.40  |  0.40  |  0.40  |  0.40  |
++---------+--------+--------+--------+--------+
 
 So this means that view 1 should make at least 40% error on label 1 and 65% on label 2.
 
@@ -72,10 +74,10 @@ The config file that will be used in this example is available :base_source:`her
 
 + Then the classification-related arguments :
 
-    - :yaml:`split: 0.25` (:base_source:`l35 <multiview_platform/examples/config_files/config_example_1.yml#L35>`) means that 80% of the dataset will be used to test the different classifiers and 20% to train them,
+    - :yaml:`split: 0.25` (:base_source:`l35 <multiview_platform/examples/config_files/config_example_1.yml#L35>`) means that 75% of the dataset will be used to test the different classifiers and 25% to train them,
     - :yaml:`type: ["monoview", "multiview"]` (:base_source:`l43 <multiview_platform/examples/config_files/config_example_1.yml#L43>`) allows for monoview and multiview algorithms to be used in the benchmark,
     - :yaml:`algos_monoview: ["decision_tree"]` (:base_source:`l45 <multiview_platform/examples/config_files/config_example_1.yml#L45>`) runs a Decision tree on each view,
-    - :yaml:`algos_monoview: ["weighted_linear_early_fusion", "weighted_linear_late_fusion"]` (:base_source:`l47 <multiview_platform/examples/config_files/config_example_1.yml#L47>`) runs a late and an early fusion,
+    - :yaml:`algos_multiview: ["weighted_linear_early_fusion", "weighted_linear_late_fusion"]` (:base_source:`l47 <multiview_platform/examples/config_files/config_example_1.yml#L47>`) runs a late and an early fusion,
     - The metrics configuration (:base_source:`l52-55 <multiview_platform/examples/config_files/config_example_1.yml#L52>`) ::
 
                         metrics:
@@ -101,6 +103,7 @@ The execution should take less than five minutes. We will first analyze the resu
 The result structure can be startling at first, but, as the platform provides a lot of information, it has to be organized.
 
 The results are stored in :base_source:`a directory <multiview_platform/examples/results/example_1/>`. Inside, you will find a directory with the name of the database used for the benchmark, here : ``summit_doc/``.
+
 Finally, a directory with the date and time of the beginning of the experiment. Let's say you started the benchmark on the 25th of December 1560, at 03:42 PM, the directory's name should be ``started_1560_12_25-15_42/``.
 
 From here, the result directory has the following structure :
@@ -156,7 +159,7 @@ Let's comment each file :
 ``*-accuracy_score*.html``, ``*-accuracy_score*.png`` and ``*-accuracy_score*.csv``
 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
 
-These files contain the scores of each classifier for the accuracy metric, ordered with the best ones on the right and the worst ones on the left, as an interactive html page, an image or a csv matrix. The star after ``accuracy_score*`` means that it was the principal metric (the usefulness of the principal metric will be explained later).
+These files contain the scores of each classifier for the accuracy metric, ordered with the worst ones on the left and the best ones on the right, as an interactive html page, an image or a csv matrix. The star after ``accuracy_score*`` means that it was the principal metric (the usefulness of the principal metric will be explained later).
 The html version is as follows :
 
 .. raw:: html
@@ -167,7 +170,7 @@ This is a bar plot showing the score on the training set (light gray), and testi
 
 Here, the generated dataset is built to introduce some complementarity amongst the views. As a consequence, the two multiview algorithms, even if they are naive, have a better score than the decision trees.
 
-The ``.csv`` file is a matrix with the score on train stored in the first row and the score on test stored in the second one. Each classifier is presented in a row. It is loadable with pandas.
+The ``.csv`` file is a matrix with the score on train stored in the first row and the score on test stored in the second one. Each classifier is presented in a column. It is loadable with pandas.
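+
+For instance, a minimal sketch of loading it with pandas (the file name here is illustrative, use the one generated by your run) ::
+
+    import pandas as pd
+
+    # One column per classifier ; the first row holds the train scores and the
+    # second one the test scores (add index_col=0 if a label column is present).
+    scores = pd.read_csv("started_1560_12_25-15_42-accuracy_score.csv")
+    train_scores, test_scores = scores.iloc[0], scores.iloc[1]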
 
 A similar graph, ``*-accuracy_score*-class.html``, reports the error of each classifier on each class.
 
@@ -175,7 +178,7 @@ A similar graph ``*-accuracy_score*-class.html``, reports the error of each clas
     .. :file: ./images/example_1/accuracy_class.html
     :file: images/fake.html
 
-Here, for each classifier, 8 bars are plotted, one foe each class. It is clear that fore the monoview algorithms, in views 2 and 3, the third class is difficult, as showed in the error matrix.
+Here, for each classifier, 8 bars are plotted, one for each class. It is clear that for the monoview algorithms, in views 2 and 3, the third class is difficult, as shown in the error matrix.
 
 
 ``*-error_analysis_2D.png`` and ``*-error_analysis_2D.html``
@@ -210,9 +213,9 @@ In terms of information, this is useful to detect possible outlier examples in t
 For example, a mainly black horizontal line for an example means that it has been misclassified by most of the classifiers.
 It could mean that the example is incorrectly labeled in the dataset or is very hard to classify.
 
-Symmetrically, a mainly-black column means that a classifier spectacularly failed on the asked task.
+Symmetrically, a mainly-black column means that a classifier spectacularly failed.
 
-The data used to generate those matrices is available in ``*-2D_plot_data.csv``
+The data used to generate this matrix is available in ``*-2D_plot_data.csv``.
 
 ``*-error_analysis_bar.png`` and ``*-error_analysis_bar.html``
 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
diff --git a/docs/source/tutorials/example2.rst b/docs/source/tutorials/example2.rst
index 67bf4a631585969b45ee93617beb3080fefb5841..4147a605a7eb350595cc297307227b7f6e28aa62 100644
--- a/docs/source/tutorials/example2.rst
+++ b/docs/source/tutorials/example2.rst
@@ -5,7 +5,7 @@
 Example 2 : Understanding the hyper-parameter optimization
 ==========================================================
 
-If you are not familir with hyper-parameter optimization, see :base_doc:`Hyper-parameters 101 <tutorials/hps_theory.html>`
+If you are not familiar with hyper-parameter optimization, see :base_doc:`Hyper-parameters 101 <tutorials/hps_theory.html>`
 
 Hands-on experience
 -------------------
@@ -118,21 +118,22 @@ Conclusion
 >>>>>>>>>>
 
 The split ratio has two consequences :
+
 - Increasing the test set size decreases the information available in the train set, so either it helps to avoid overfitting (Adaboost) or it can hide useful information from the classifier and therefore decrease its performance (decision tree),
-- The second consequence is that decreasing test size will increase the benchmark duration as the classifier will have to learn  on more examples, this duration modification is higher if the dataset has high dimensionality and if the algorithm is algorithmically complex.
+- The second consequence is that increasing the train size will increase the benchmark duration, as the classifiers will have to learn on more examples; this increase is larger if the dataset has high dimensionality and if the algorithm is computationally complex.
 
 .. _random:
 Example 2.2 : Usage of randomized hyper-parameter optimization :
 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
 
-In the previous example, we have seen that the split ratio has an impact on the train duration and performance of the algorithms, b the most time-consuming task is optimizing their hyper parameters.
+In the previous example, we have seen that the split ratio has an impact on the train duration and performance of the algorithms, but the most time-consuming task is optimizing their hyper-parameters.
 
 For all the previous examples, the platform used the hyper-parameter values given in the config file.
 This is only useful if one knows the optimal combination of hyper-parameters for the given task.
 
 However, most of the time, they are unknown to the user, and then have to be optimized by the platform.
 
-In this example, we will use an randomized search, one of the two hyper-parameter optimization methods implemented in |platf|, to do so we will go through five lines of the config file :
+In this example, we will use a randomized search, one of the two hyper-parameter optimization methods implemented in |platf|. To do so, we will go through five lines of the config file :
 
 - :yaml:`hps_type:`, controlling the type of hyper-parameter search,
 - :yaml:`n_iter:`, controlling the number of random draws during the hyper-parameter search,
@@ -206,7 +207,7 @@ Here, we used :yaml:`split: 0.8` and the results are far better than :base_doc:`
 
 
 
-The choice made here is to allow a different amount of draws for mono and multiview classifiers. However, allowing the same number of draws to both is also available by setting :yaml:` equivalent_draws: False`.
+The choice made here is to allow a different amount of draws for mono and multiview classifiers. However, allowing the same number of draws to both is also available by setting :yaml:`equivalent_draws: False`.
 
 .. note::
 
diff --git a/docs/source/tutorials/example3.rst b/docs/source/tutorials/example3.rst
index 8f14d07e09295091471447bef6345aad89a70b6d..d9f90322a6ef4573ce9bd6fee9903a69c141d9db 100644
--- a/docs/source/tutorials/example3.rst
+++ b/docs/source/tutorials/example3.rst
@@ -6,8 +6,10 @@ Context
 -------
 
 In the previous example, we have seen that in order to output meaningful results, the platform splits the input dataset into a training set and a testing set.
+
 However, even if the split is done at random, one can draw a lucky (or unlucky) split and have great (or poor) performance on this specific split.
-To settle this issue, the platform can run on multiple splits and return the mean.
+
+To settle this issue, the platform can run on multiple splits and return the mean scores.
 
 
 How to use it
diff --git a/docs/source/tutorials/example5.rst b/docs/source/tutorials/example5.rst
index 4461b9996fc2bff98bc235a6f8d810c9203f41bd..ba04eb9dbd9cc4bbf1707f232838f1dc8658ddca 100644
--- a/docs/source/tutorials/example5.rst
+++ b/docs/source/tutorials/example5.rst
@@ -37,7 +37,7 @@ Indeed, all the algorithms included in the platform must provide two hyper-param
 - :python:`self.param_names` that contains the names of the hyper-parameters that have to be optimized (they must correspond to the names of the attributes of the class :python:`Algo`)
 - :python:`self.distribs` that contains the distributions for each of these hyper-parameters.
 
-For example, let's suppose that algo need three hyper-parameters and a random state parameter allowing reproducibility :
+For example, let's suppose that |algo| needs three hyper-parameters and a random state parameter allowing reproducibility (see the sketch below) :
 
 - :python:`trade_off` that is a float between 0 and 1,
 - :python:`norm_type` that is a string in :python:`["l1", "l2"]`,
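+
+A minimal sketch of the corresponding attributes for the hyper-parameters above (``CustomUniform`` is an assumption here, mirroring the ``CustomRandint`` helper used later in this tutorial ; the third hyper-parameter is left out) ::
+
+    self.param_names = ["trade_off", "norm_type", "random_state"]
+    self.distribs = [CustomUniform(0, 1),  # a float between 0 and 1
+                     ["l1", "l2"],         # a fixed set of values
+                     [random_state]]       # fixed, only there for reproducibility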
@@ -78,7 +78,7 @@ It is possible to provide some information about the decision process of the alg
 It takes four arguments :
 
 * :python:`directory`, a string containing the directory where figures should be stored
-* :python:`base_file_name`, a string containing the file name prefix that shoul be used to sotre figures
+* :python:`base_file_name`, a string containing the file name prefix that should be used to store figures
 * :python:`y_test`, an array containing the labels of the test set
 * :python:`multiclass`, a boolean that is True if the target is multiclass
 
@@ -127,8 +127,8 @@ Moreover, one has to add a variable called :python:`classifier_class_name` that
                 self.param_names = ["param_1", "random_state", "param_2"]
                 self.distribs = [CustomRandint(5,200), [random_state], ["val_1", "val_2"]]
 
-In |platf| the input of the :python:`fit()` method is `X`, a dataset object that provide access to each view with a method : :python:`dataset_var.get_v(view_index, example_indices)`,
-so in order to add a mutliview classifier to |platf|, one will probably have to add a data-transformation step before using the class's :python:`fit()` method.
+In |platf| the input of the :python:`fit()` method is `X`, a dataset object that provides access to each view with the method :python:`dataset_var.get_v(view_index, example_indices)`.
+So in order to add a multiview classifier to |platf|, one will probably have to add a data-transformation step before using the class's :python:`fit()` method.
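+
+A minimal sketch of such a step, assuming the new class inherits from the original algorithm, and naming the two index arguments (``train_indices``, ``view_indices``, see below) for illustration ::
+
+    import numpy as np
+
+    def fit(self, X, y, train_indices, view_indices):
+        # X is a dataset object, not an array : fetch each selected view and
+        # concatenate them into a single (examples x features) matrix.
+        views = [X.get_v(view_index, train_indices) for view_index in view_indices]
+        numeric_X = np.concatenate(views, axis=1)
+        # Then fit the original algorithm on the transformed data.
+        return super().fit(numeric_X, y[train_indices])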
 
 Moreover, to restrict the examples and descriptors used in the method, |platf| provides two supplementary arguments :
 
diff --git a/docs/source/tutorials/images/example_3/gray.png b/docs/source/tutorials/images/example_3/gray.png
new file mode 100644
index 0000000000000000000000000000000000000000..cd69ba179151e37438710ab810e3bcff78323001
Binary files /dev/null and b/docs/source/tutorials/images/example_3/gray.png differ
diff --git a/docs/source/tutorials/installation.rst b/docs/source/tutorials/installation.rst
index 57cc4154771b71688e595807e4d4750e80486b76..de5ab85a47e371689c242a8e2af8892dd8514802 100644
--- a/docs/source/tutorials/installation.rst
+++ b/docs/source/tutorials/installation.rst
@@ -3,7 +3,7 @@
 Install |platf|
 =======================================
 
-Multiview Platform is a package developped for Python3.x.
+|platf| is a package developed for Python 3.x.
 
 To sum up what you need to run the platform :