diff --git a/docs/source/tutorials/example0.rst b/docs/source/tutorials/example0.rst
index 20129c87240a036fcb3b0f8023185773365f19be..9a1baeeae5acec6f71e1595d0ff8fa8bfa81688b 100644
--- a/docs/source/tutorials/example0.rst
+++ b/docs/source/tutorials/example0.rst
@@ -45,7 +45,8 @@ The file that regroups the accuracy scores is available in three versions :
 
 - and :base_source:`an html interactive file <multiview_platform/examples/results/example_0/digits/result_example/digits-accuracy_score*.html>` :
 
 .. raw:: html
-   :file: images/example_0/acc.html
+   .. :file: images/example_0/acc.html
+   :file: images/fake.html
 
 These three files contain the same information : the two figures are bar plots of the score of each classifier, with the score on the training set in light gray and the score on the testing set in black.
@@ -62,7 +63,8 @@ Once one has the scores of each classifier, an interesting analysis could be to
 
 This is possible with another result analysis, available in :base_source:`png <multiview_platform/examples/results/example_0/digits/result_example/digits-error_analysis_2D.png>`, :base_source:`csv <multiview_platform/examples/results/example_0/digits/result_example/digits_2D_plot_data.csv>` and :base_source:`html <multiview_platform/examples/results/example_0/digits/result_example/digits-error_analysis_2D.html>` :
 
 .. raw:: html
-   :file: images/example_0/err.html
+   .. :file: images/example_0/err.html
+   :file: images/fake.html
 
 This figure represents a matrix, with the examples in rows and classifiers in columns, with a white rectangle on row i, column j if classifier j failed to classify example i.
diff --git a/docs/source/tutorials/example1.rst b/docs/source/tutorials/example1.rst
index e650ce6f811d565aeb443df5ede6c810ffa85261..2c8e481d3cd1b50abc0073b1ab34406a030c8f7e 100644
--- a/docs/source/tutorials/example1.rst
+++ b/docs/source/tutorials/example1.rst
@@ -160,7 +160,8 @@ These files contain the scores of each classifier for the accuracy metric, order
 
 The html version is as follows :
 
 .. raw:: html
-   :file: ./images/example_1/accuracy.html
+   .. :file: ./images/example_1/accuracy.html
+   :file: images/fake.html
 
 This is a bar plot showing the score on the training set (light gray) and the testing set (black) for each monoview classifier on each view, and for each multiview classifier.
@@ -171,7 +172,8 @@ The ``.csv`` file is a matrix with the score on train stored in the first row an
 
 A similar graph, ``*-accuracy_score*-class.html``, reports the error of each classifier on each class.
 
 .. raw:: html
-   :file: ./images/example_1/accuracy_class.html
+   .. :file: ./images/example_1/accuracy_class.html
+   :file: images/fake.html
 
 Here, for each classifier, 8 bars are plotted, one for each class. It is clear that for the monoview algorithms, in views 2 and 3, the third class is difficult, as shown in the error matrix.
@@ -191,7 +193,8 @@ The examples labelled as ``Mutual_error_*`` are mis-classified by most of the al
 
 It is highly recommended to zoom in the html figure to see each row.
 
 .. raw:: html
-   :file: ./images/example_1/error_2d.html
+   .. :file: ./images/example_1/error_2d.html
+   :file: images/fake.html
 
 
@@ -217,7 +220,8 @@ The data used to generate those matrices is available in ``*-2D_plot_data.csv``
 
 This file is a different way to visualize the same information as the two previous ones. Indeed, it is a bar plot, with a bar for each example, counting the ratio of classifiers that failed to classify this particular example.
 
 .. raw:: html
-   :file: ./images/example_1/bar.html
+   .. :file: ./images/example_1/bar.html
+   :file: images/fake.html
 
 All the spikes are the mutual error examples, the complementary ones are the 0.33 bars and the redundant ones are the empty spaces.
diff --git a/docs/source/tutorials/example2.rst b/docs/source/tutorials/example2.rst
index 18ac65b09434529af3e048bb72b02e4bb1fc0cf1..397b6712c7e7cffd52eedf863263322ce7362b96 100644
--- a/docs/source/tutorials/example2.rst
+++ b/docs/source/tutorials/example2.rst
@@ -85,7 +85,8 @@ To run this example run,
 
 The results for the accuracy metric are stored in ``multiview_platform/examples/results/example_2_1_1/doc_summit/``
 
 .. raw:: html
-   :file: ./images/example_2/2_1/low_train_acc.html
+   .. :file: ./images/example_2/2_1/low_train_acc.html
+   :file: images/fake.html
 
 These results were generated by learning on 20% of the dataset and testing on 80% (see the :base_source:`config file <multiview_platform/examples/config_files/config_example_2_1_1.yml#L37>`).
@@ -104,7 +105,8 @@ Now, if you run :
 
 You should obtain these scores in ``multiview_platform/examples/results/example_2_1/doc_summit/`` :
 
 .. raw:: html
-   :file: ./images/example_2/2_1/high_train_accs.html
+   .. :file: ./images/example_2/2_1/high_train_accs.html
+   :file: images/fake.html
 
 Here we learned on 80% of the dataset and tested on 20%, so the line in the :base_source:`config file <multiview_platform/examples/config_files/config_example_2_1_2.yml#L37>` has become ``split: 0.2``.
@@ -199,7 +201,8 @@ Here, we used :yaml:`split: 0.8` and the results are far better than :base_doc:`
 
 
 .. raw:: html
-   :file: ./images/example_2/2_2/acc_random_search.html
+   .. :file: ./images/example_2/2_2/acc_random_search.html
+   :file: images/fake.html
 
 
 
@@ -227,7 +230,8 @@ with different fold/draws settings :
 
 
 .. raw:: html
-   :file: ./images/durations.html
+   .. :file: ./images/durations.html
+   :file: images/fake.html
 
 
 .. note::
diff --git a/docs/source/tutorials/images/error_2D.html b/docs/source/tutorials/images/fake.html
similarity index 100%
rename from docs/source/tutorials/images/error_2D.html
rename to docs/source/tutorials/images/fake.html