Commit 122d71c1 authored by Baptiste Bauvin
parent c33aa00e
Pipeline #8599 failed
Showing 195 additions and 65 deletions
@@ -21,10 +21,10 @@ Documentation

-reference/api
 tutorial/install_devel
 tutorial/auto_examples/index
 tutorial/times
+reference/api
 tutorial/credits
No preview for this file type
No preview for this file type
:orphan:

-.. _sphx_glr_tutorial_auto_examples_cumbo_sg_execution_times:
+.. _sphx_glr_tutorial_auto_examples_combo_sg_execution_times:

Computation times
=================

-**00:01.102** total execution time for **tutorial_auto_examples_cumbo** files:
+**00:03.474** total execution time for **tutorial_auto_examples_combo** files:

 +--------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_2_views_2_classes.py` (``plot_cumbo_2_views_2_classes.py``)  | 00:00.603 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_2_views_2_classes.py` (``plot_combo_2_views_2_classes.py``)  | 00:02.387 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_3_views_3_classes.py` (``plot_cumbo_3_views_3_classes.py``)  | 00:00.499 | 0.0 MB |
+| :ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_3_views_3_classes.py` (``plot_combo_3_views_3_classes.py``)  | 00:01.088 | 0.0 MB |
 +--------------------------------------------------------------------------------------------------------------------+-----------+--------+
@@ -19,7 +19,7 @@ Multimodal Examples

-.. _sphx_glr_tutorial_auto_examples_cumbo:
+.. _sphx_glr_tutorial_auto_examples_combo:

MuCuMBo Examples

@@ -37,9 +37,9 @@ cooperation between views for classification.

.. only:: html

-.. figure:: /tutorial/auto_examples/cumbo/images/thumb/sphx_glr_plot_cumbo_2_views_2_classes_thumb.png
+.. figure:: /tutorial/auto_examples/combo/images/thumb/sphx_glr_plot_combo_2_views_2_classes_thumb.png

-:ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_2_views_2_classes.py`
+:ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_2_views_2_classes.py`

.. raw:: html

@@ -49,7 +49,7 @@ cooperation between views for classification.

.. toctree::
   :hidden:

-   /tutorial/auto_examples/cumbo/plot_cumbo_2_views_2_classes
+   /tutorial/auto_examples/combo/plot_combo_2_views_2_classes

.. raw:: html

@@ -57,9 +57,9 @@ cooperation between views for classification.

.. only:: html

-.. figure:: /tutorial/auto_examples/cumbo/images/thumb/sphx_glr_plot_cumbo_3_views_3_classes_thumb.png
+.. figure:: /tutorial/auto_examples/combo/images/thumb/sphx_glr_plot_combo_3_views_3_classes_thumb.png

-:ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_3_views_3_classes.py`
+:ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_3_views_3_classes.py`

.. raw:: html

@@ -69,7 +69,7 @@ cooperation between views for classification.

.. toctree::
   :hidden:

-   /tutorial/auto_examples/cumbo/plot_cumbo_3_views_3_classes
+   /tutorial/auto_examples/combo/plot_combo_3_views_3_classes

.. raw:: html

<div class="sphx-glr-clear"></div>

@@ -242,13 +242,13 @@ The following toy examples illustrate how the multimodal as usecase on digit da

.. raw:: html

-<div class="sphx-glr-thumbcontainer" tooltip="multi class digit from sklearn, multivue - vue 0 digit data (color of sklearn) - vue 1 gradia...">
+<div class="sphx-glr-thumbcontainer" tooltip="Use Case MKL on digit">

.. only:: html

-.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMuCuBo_thumb.png
+.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMKL_thumb.png

-:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuCuBo.py`
+:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py`

.. raw:: html

@@ -258,17 +258,17 @@ The following toy examples illustrate how the multimodal as usecase on digit da

.. toctree::
   :hidden:

-   /tutorial/auto_examples/usecase/plot_usecase_exampleMuCuBo
+   /tutorial/auto_examples/usecase/plot_usecase_exampleMKL

.. raw:: html

-<div class="sphx-glr-thumbcontainer" tooltip="Use Case MKL on digit">
+<div class="sphx-glr-thumbcontainer" tooltip="multi class digit from sklearn, multivue - vue 0 digit data (color of sklearn) - vue 1 gradia...">

.. only:: html

-.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMKL_thumb.png
+.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMuComBo_thumb.png

-:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py`
+:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuComBo.py`

.. raw:: html

@@ -278,7 +278,7 @@ The following toy examples illustrate how the multimodal as usecase on digit da

.. toctree::
   :hidden:

-   /tutorial/auto_examples/usecase/plot_usecase_exampleMKL
+   /tutorial/auto_examples/usecase/plot_usecase_exampleMuComBo

.. raw:: html

<div class="sphx-glr-clear"></div>

@@ -291,15 +291,15 @@ The following toy examples illustrate how the multimodal as usecase on digit da

:class: sphx-glr-footer-gallery

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-python

-:download:`Download all examples in Python source code: auto_examples_python.zip <//home/dominique/projets/ANR-Lives/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_python.zip>`
+:download:`Download all examples in Python source code: auto_examples_python.zip <//home/baptiste/Documents/Gitwork/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_python.zip>`

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-jupyter

-:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//home/dominique/projets/ANR-Lives/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_jupyter.zip>`
+:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//home/baptiste/Documents/Gitwork/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_jupyter.zip>`

.. only:: html
@@ -3,8 +3,8 @@

.. _sphx_glr_tutorial_auto_examples_mumbo_sg_execution_times:

-Computation times
-=================
+Mumbo computation times
+=======================

**00:02.013** total execution time for **tutorial_auto_examples_mumbo** files:

+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
@@ -3,8 +3,8 @@

.. _sphx_glr_tutorial_auto_examples_mvml_sg_execution_times:

-Computation times
-=================
+MVML computation times
+======================

**00:03.630** total execution time for **tutorial_auto_examples_mvml** files:

+-------------------------------------------------------------------------------+-----------+--------+
doc/tutorial/auto_examples/usecase/images/sphx_glr_plot_usecase_exampleMKL_001.png (image replaced: 80.3 KiB → 76.8 KiB)
doc/tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMKL_thumb.png (image replaced: 24.5 KiB → 24.3 KiB)
%% Cell type:code id: tags:

``` python
%matplotlib inline
```

%% Cell type:markdown id: tags:

# Use Case MKL on digit

Use case of the multimodallearn MKL classifier on the multi-class digit dataset from sklearn, with three views:
 - view 0: digit data (color of sklearn)
 - view 1: gradient of the image in the first direction
 - view 2: gradient of the image in the second direction

%% Cell type:code id: tags:

``` python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multiclass import OneVsOneClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from multimodal.datasets.base import load_dict, save_dict
from multimodal.tests.data.get_dataset_path import get_dataset_path
from multimodal.datasets.data_sample import MultiModalArray
from multimodal.kernels.lpMKL import MKL

import matplotlib._color_data as mcd


def plot_subplot(X, Y, Y_pred, vue, subplot, title):
    cn = mcd.CSS4_COLORS
    classes = np.unique(Y)
    n_classes = len(np.unique(Y))
    axs = plt.subplot(subplot[0], subplot[1], subplot[2])
    axs.set_title(title)
    for index, k in zip(range(n_classes), cn.keys()):
        Y_class, = np.where(Y == classes[index])
        Y_class_pred = np.intersect1d(np.where(Y_pred == classes[index])[0],
                                      np.where(Y_pred == Y)[0])
        plt.scatter(X._extract_view(vue)[Y_class],
                    X._extract_view(vue)[Y_class],
                    s=40, c=cn[k], edgecolors='blue', linewidths=2,
                    label="class real class: " + str(index))
        plt.scatter(X._extract_view(vue)[Y_class_pred],
                    X._extract_view(vue)[Y_class_pred],
                    s=160, edgecolors='orange', linewidths=2,
                    label="class prediction: " + str(index))


if __name__ == '__main__':
    # file = get_dataset_path("digit_histogram.npy")
    file = get_dataset_path("digit_col_grad.npy")
    y = np.load(get_dataset_path("digit_y.npy"))
    dic_digit = load_dict(file)
    XX = MultiModalArray(dic_digit)
    X_train, X_test, y_train, y_test = train_test_split(XX, y)

    est4 = OneVsOneClassifier(MKL(lmbda=0.1, nystrom_param=0.2)).fit(X_train, y_train)
    y_pred4 = est4.predict(X_test)
    y_pred44 = est4.predict(X_train)
    print("result of MKL on digit with oneversone")
    result4 = np.mean(y_pred4.ravel() == y_test.ravel()) * 100
    print(result4)

    fig = plt.figure(figsize=(12., 11.))
    fig.suptitle("MKL : result" + str(result4), fontsize=16)
    plot_subplot(X_train, y_train, y_pred44, 0, (4, 1, 1), "train vue 0 color")
    plot_subplot(X_test, y_test, y_pred4, 0, (4, 1, 2), "test vue 0 color")
    plot_subplot(X_test, y_test, y_pred4, 1, (4, 1, 3), "test vue 1 gradiant 0")
    plot_subplot(X_test, y_test, y_pred4, 2, (4, 1, 4), "test vue 2 gradiant 1")
    plt.show()
```
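The plotting code above can obscure the core pattern, so here is a minimal sketch that keeps only the model-fitting part. The two random views are a hypothetical stand-in for the digit views loaded with `load_dict`, and it assumes, as in the example above, that `MultiModalArray` accepts a dict of equal-length views keyed by view index; the MKL parameters are the ones used in the use case.

``` python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from multimodal.datasets.data_sample import MultiModalArray
from multimodal.kernels.lpMKL import MKL

# Hypothetical stand-in for dic_digit: two views with the same number of samples.
rng = np.random.RandomState(0)
views = {0: rng.rand(120, 16), 1: rng.rand(120, 8)}
y = rng.randint(0, 3, size=120)

X = MultiModalArray(views)                     # multi-view container, as above
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One-vs-one wrapping of the lpMKL learner, as in the use case above.
clf = OneVsOneClassifier(MKL(lmbda=0.1, nystrom_param=0.2))
clf.fit(X_train, y_train)
print("accuracy:", np.mean(clf.predict(X_test) == y_test))
```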
@@ -19,7 +19,6 @@ from sklearn.tree import DecisionTreeClassifier

 from multimodal.datasets.base import load_dict, save_dict
 from multimodal.tests.data.get_dataset_path import get_dataset_path
 from multimodal.datasets.data_sample import MultiModalArray
-from multimodal.kernels.mvml import MVML
 from multimodal.kernels.lpMKL import MKL

 import numpy as np

@@ -50,7 +49,6 @@ if __name__ == '__main__':

     # file = get_dataset_path("digit_histogram.npy")
     file = get_dataset_path("digit_col_grad.npy")
     y = np.load(get_dataset_path("digit_y.npy"))
-    base_estimator = DecisionTreeClassifier(max_depth=4)
     dic_digit = load_dict(file)
     XX = MultiModalArray(dic_digit)
     X_train, X_test, y_train, y_test = train_test_split(XX, y)
-f7b5c3f0fd24e4628f03aa7019eea376
+3360d3ee5508f0e16023ee336767f17c
\ No newline at end of file
+.. only:: html

.. note::
   :class: sphx-glr-download-link-note

@@ -30,8 +32,8 @@ multi class digit from sklearn, multivue

.. code-block:: none

 result of MKL on digit with oneversone
-96.88888888888889
+97.77777777777777

-/home/dominique/projets/ANR-Lives/scikit-multimodallearn/examples/usecase/plot_usecase_exampleMKL.py:72: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
+/home/baptiste/Documents/Gitwork/scikit-multimodallearn/examples/usecase/plot_usecase_exampleMKL.py:70: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
 plt.show()

@@ -53,7 +55,6 @@ multi class digit from sklearn, multivue

 from multimodal.datasets.base import load_dict, save_dict
 from multimodal.tests.data.get_dataset_path import get_dataset_path
 from multimodal.datasets.data_sample import MultiModalArray
-from multimodal.kernels.mvml import MVML
 from multimodal.kernels.lpMKL import MKL

 import numpy as np

@@ -84,7 +85,6 @@ multi class digit from sklearn, multivue

     # file = get_dataset_path("digit_histogram.npy")
     file = get_dataset_path("digit_col_grad.npy")
     y = np.load(get_dataset_path("digit_y.npy"))
-    base_estimator = DecisionTreeClassifier(max_depth=4)
     dic_digit = load_dict(file)
     XX = MultiModalArray(dic_digit)
     X_train, X_test, y_train, y_test = train_test_split(XX, y)

@@ -109,7 +109,7 @@ multi class digit from sklearn, multivue

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 20.457 seconds)
+**Total running time of the script:** ( 1 minutes 59.263 seconds)

.. _sphx_glr_download_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py:

@@ -122,13 +122,13 @@ multi class digit from sklearn, multivue

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-python

   :download:`Download Python source code: plot_usecase_exampleMKL.py <plot_usecase_exampleMKL.py>`

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-jupyter

   :download:`Download Jupyter notebook: plot_usecase_exampleMKL.ipynb <plot_usecase_exampleMKL.ipynb>`
No preview for this file type
@@ -5,16 +5,16 @@

Computation times
=================

-**01:55.487** total execution time for **tutorial_auto_examples_usecase** files:
+**02:26.402** total execution time for **tutorial_auto_examples_usecase** files:

-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMVML.py` (``plot_usecase_exampleMVML.py``)      | 01:14.485 | 0.0 MB |
-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py` (``plot_usecase_exampleMKL.py``)        | 00:20.457 | 0.0 MB |
-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuCuBo.py` (``plot_usecase_exampleMuCuBo.py``)  | 00:14.171 | 0.0 MB |
-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMumBo.py` (``plot_usecase_exampleMumBo.py``)    | 00:06.374 | 0.0 MB |
-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorial_auto_examples_usecase_usecase_function.py` (``usecase_function.py``)                      | 00:00.000 | 0.0 MB |
-+------------------------------------------------------------------------------------------------------------------+-----------+--------+
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py` (``plot_usecase_exampleMKL.py``)          | 01:59.263 | 0.0 MB |
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuComBo.py` (``plot_usecase_exampleMuComBo.py``)  | 00:27.139 | 0.0 MB |
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMVML.py` (``plot_usecase_exampleMVML.py``)        | 00:00.000 | 0.0 MB |
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMumBo.py` (``plot_usecase_exampleMumBo.py``)      | 00:00.000 | 0.0 MB |
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_tutorial_auto_examples_usecase_usecase_function.py` (``usecase_function.py``)                        | 00:00.000 | 0.0 MB |
++--------------------------------------------------------------------------------------------------------------------+-----------+--------+
+.. only:: html

.. note::
   :class: sphx-glr-download-link-note

@@ -60,13 +62,13 @@ Function plot_subplot

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-python

   :download:`Download Python source code: usecase_function.py <usecase_function.py>`

-.. container:: sphx-glr-download
+.. container:: sphx-glr-download sphx-glr-download-jupyter

   :download:`Download Jupyter notebook: usecase_function.ipynb <usecase_function.ipynb>`
No preview for this file type
.. _estim-template:
Estimator template
==================
To add a multimodal estimator built on the groundwork of scikit-multimodallearn,
feel free to start from the following template, and comply with the
`Developer's Guide <http://scikit-learn.org/stable/developers>`_ of the
scikit-learn project to ensure full compatibility.
.. code-block:: default

    import numpy as np
    from sklearn.base import ClassifierMixin, BaseEstimator
    from sklearn.utils import check_X_y
    from sklearn.utils.multiclass import check_classification_targets
    from sklearn.utils.validation import check_is_fitted
    from multimodal.boosting.boost import UBoosting


    class NewMultiModalEstimator(BaseEstimator, ClassifierMixin, UBoosting):
        r"""
        Your documentation
        """

        def __init__(self, your_attributes=None):
            self.your_attributes = your_attributes

        def fit(self, X, y, views_ind=None):
            """Build a multimodal classifier from the training set (X, y).

            Parameters
            ----------
            X : dict, dictionary with all views
                or
                `MultiModalData`, `MultiModalArray`, `MultiModalSparseArray`
                or
                {array-like, sparse matrix}, shape = (n_samples, n_features)
                Training multi-view input samples.
                Sparse matrix can be CSC, CSR, COO, DOK, or LIL.
                COO, DOK and LIL are converted to CSR.

            y : array-like, shape = (n_samples,)
                Target values (class labels).

            views_ind : array-like (default=[0, n_features//2, n_features])
                Parameter specifying how to extract the data views from X:

                - If views_ind is a 1-D array of sorted integers, the entries
                  indicate the limits of the slices used to extract the views,
                  where view ``n`` is given by
                  ``X[:, views_ind[n]:views_ind[n+1]]``.
                  With this convention each view is therefore a view (in the
                  NumPy sense) of X and no copy of the data is done.

                - If views_ind is an array of arrays of integers, then each
                  array of integers ``views_ind[n]`` specifies the indices of
                  view ``n``, which is then given by ``X[:, views_ind[n]]``.
                  With this convention each view therefore creates a partial
                  copy of the data in X. This convention is thus more flexible
                  but less efficient than the previous one.

            Returns
            -------
            self : object
                Returns self.

            Raises
            ------
            ValueError if the estimator does not support sample_weight
            ValueError if `X` and `views_ind` are not compatible
            """
            # _global_X_transform processes the multimodal dataset to convert
            # it into the MultiModalArray format.
            self.X_ = self._global_X_transform(X, views_ind=views_ind)

            # Ensure proper format for views_ind and return the number of views.
            views_ind_, n_views = self.X_._validate_views_ind(self.X_.views_ind,
                                                              self.X_.shape[1])

            # According to scikit-learn guidelines.
            check_X_y(self.X_, y)
            if not isinstance(y, np.ndarray):
                y = np.asarray(y)
            check_classification_targets(y)
            self._validate_estimator()

            return self

        def predict(self, X):
            """Predict classes for X.

            Parameters
            ----------
            X : {array-like, sparse matrix}, shape = (n_samples, n_features)
                Multi-view input samples.
                Sparse matrix can be CSC, CSR, COO, DOK, or LIL.
                COO, DOK and LIL are converted to CSR.

            Returns
            -------
            y : numpy.ndarray, shape = (n_samples,)
                Predicted classes.

            Raises
            ------
            ValueError if the 'X' input matrix does not have the same total
            number of features as the 'X' data used for fit
            """
            # According to scikit-learn guidelines.
            check_is_fitted(self, ("your_attributes"))

            # _global_X_transform processes the multimodal dataset to convert
            # it into the MultiModalArray format.
            X = self._global_X_transform(X, views_ind=self.X_.views_ind)

            # Ensure that X is in the proper format.
            X = self._validate_X_predict(X)

            # Return fake multi-class labels.
            return np.random.randint(0, 5, size=X.shape[0])
\ No newline at end of file
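To make the expected calling pattern concrete, here is a minimal, hypothetical sketch of how a completed estimator derived from this template could be driven. The random two-view data, the ``views_ind`` values and the printed shape are illustrative assumptions, not part of the template file; with the template as written, ``fit`` only validates its inputs and ``predict`` returns placeholder labels.

.. code-block:: python

    import numpy as np

    # Hypothetical data: 60 samples whose 20 features are split into two views.
    X = np.random.rand(60, 20)
    y = np.random.randint(0, 2, size=60)

    # Convention 1: 1-D limits -> view 0 is X[:, 0:10], view 1 is X[:, 10:20].
    views_ind = [0, 10, 20]
    # Convention 2 (equivalent here): one index array per view.
    # views_ind = [np.arange(0, 10), np.arange(10, 20)]

    clf = NewMultiModalEstimator()       # the template class defined above
    clf.fit(X, y, views_ind=views_ind)   # only validates X and y until fit is completed
    y_pred = clf.predict(X)              # placeholder labels with the template as-is
    print(y_pred.shape)                  # (60,)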
@@ -38,7 +38,8 @@ The development of scikit-multimodallearn follows the guidelines provided by the

 scikit-learn community.
 Refer to the `Developer's Guide <http://scikit-learn.org/stable/developers>`_
-of the scikit-learn project for more details.
+of the scikit-learn project for general details. Expanding the library can be
+done by following the template provided in :ref:`estim-template`.

 Source code
 -----------
@@ -3,7 +3,7 @@

Computation times
=================

-total execution time for **tutorial_auto_examples** files:
+Total execution time for **tutorial_auto_examples** files:

.. toctree::