Commit 122d71c1 authored by Baptiste Bauvin

Revisions

parent c33aa00e
Pipeline #8599 failed with stages in 2 minutes and 30 seconds
......@@ -21,10 +21,10 @@ Documentation
reference/api
tutorial/install_devel
tutorial/auto_examples/index
tutorial/times
reference/api
tutorial/credits
......
:orphan:
.. _sphx_glr_tutorial_auto_examples_cumbo_sg_execution_times:
.. _sphx_glr_tutorial_auto_examples_combo_sg_execution_times:
Computation times
=================
**00:01.102** total execution time for **tutorial_auto_examples_cumbo** files:
**00:03.474** total execution time for **tutorial_auto_examples_combo** files:
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_2_views_2_classes.py` (``plot_cumbo_2_views_2_classes.py``) | 00:00.603 | 0.0 MB |
| :ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_2_views_2_classes.py` (``plot_combo_2_views_2_classes.py``) | 00:02.387 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_3_views_3_classes.py` (``plot_cumbo_3_views_3_classes.py``) | 00:00.499 | 0.0 MB |
| :ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_3_views_3_classes.py` (``plot_combo_3_views_3_classes.py``) | 00:01.088 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
......@@ -19,7 +19,7 @@ Multimodal Examples
.. _sphx_glr_tutorial_auto_examples_cumbo:
.. _sphx_glr_tutorial_auto_examples_combo:
MuCuMBo Examples
......@@ -37,9 +37,9 @@ cooperation between views for classification.
.. only:: html
.. figure:: /tutorial/auto_examples/cumbo/images/thumb/sphx_glr_plot_cumbo_2_views_2_classes_thumb.png
.. figure:: /tutorial/auto_examples/combo/images/thumb/sphx_glr_plot_combo_2_views_2_classes_thumb.png
:ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_2_views_2_classes.py`
:ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_2_views_2_classes.py`
.. raw:: html
......@@ -49,7 +49,7 @@ cooperation between views for classification.
.. toctree::
:hidden:
/tutorial/auto_examples/cumbo/plot_cumbo_2_views_2_classes
/tutorial/auto_examples/combo/plot_combo_2_views_2_classes
.. raw:: html
......@@ -57,9 +57,9 @@ cooperation between views for classification.
.. only:: html
.. figure:: /tutorial/auto_examples/cumbo/images/thumb/sphx_glr_plot_cumbo_3_views_3_classes_thumb.png
.. figure:: /tutorial/auto_examples/combo/images/thumb/sphx_glr_plot_combo_3_views_3_classes_thumb.png
:ref:`sphx_glr_tutorial_auto_examples_cumbo_plot_cumbo_3_views_3_classes.py`
:ref:`sphx_glr_tutorial_auto_examples_combo_plot_combo_3_views_3_classes.py`
.. raw:: html
......@@ -69,7 +69,7 @@ cooperation between views for classification.
.. toctree::
:hidden:
/tutorial/auto_examples/cumbo/plot_cumbo_3_views_3_classes
/tutorial/auto_examples/combo/plot_combo_3_views_3_classes
.. raw:: html
<div class="sphx-glr-clear"></div>
......@@ -242,13 +242,13 @@ The following toy examples illustrate how the multimodal as usecase on digit da
.. raw:: html
<div class="sphx-glr-thumbcontainer" tooltip="multi class digit from sklearn, multivue - vue 0 digit data (color of sklearn) - vue 1 gradia...">
<div class="sphx-glr-thumbcontainer" tooltip="Use Case MKL on digit">
.. only:: html
.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMuCuBo_thumb.png
.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMKL_thumb.png
:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuCuBo.py`
:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py`
.. raw:: html
......@@ -258,17 +258,17 @@ The following toy examples illustrate how the multimodal as usecase on digit da
.. toctree::
:hidden:
/tutorial/auto_examples/usecase/plot_usecase_exampleMuCuBo
/tutorial/auto_examples/usecase/plot_usecase_exampleMKL
.. raw:: html
<div class="sphx-glr-thumbcontainer" tooltip="Use Case MKL on digit">
<div class="sphx-glr-thumbcontainer" tooltip="multi class digit from sklearn, multivue - vue 0 digit data (color of sklearn) - vue 1 gradia...">
.. only:: html
.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMKL_thumb.png
.. figure:: /tutorial/auto_examples/usecase/images/thumb/sphx_glr_plot_usecase_exampleMuComBo_thumb.png
:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py`
:ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuComBo.py`
.. raw:: html
......@@ -278,7 +278,7 @@ The following toy examples illustrate how the multimodal as usecase on digit da
.. toctree::
:hidden:
/tutorial/auto_examples/usecase/plot_usecase_exampleMKL
/tutorial/auto_examples/usecase/plot_usecase_exampleMuComBo
.. raw:: html
<div class="sphx-glr-clear"></div>
......@@ -291,15 +291,15 @@ The following toy examples illustrate how the multimodal as usecase on digit da
:class: sphx-glr-footer-gallery
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-python
:download:`Download all examples in Python source code: auto_examples_python.zip <//home/dominique/projets/ANR-Lives/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_python.zip>`
:download:`Download all examples in Python source code: auto_examples_python.zip <//home/baptiste/Documents/Gitwork/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_python.zip>`
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-jupyter
:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//home/dominique/projets/ANR-Lives/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_jupyter.zip>`
:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//home/baptiste/Documents/Gitwork/scikit-multimodallearn/doc/tutorial/auto_examples/auto_examples_jupyter.zip>`
.. only:: html
......
......@@ -3,8 +3,8 @@
.. _sphx_glr_tutorial_auto_examples_mumbo_sg_execution_times:
Computation times
=================
Mumbo computation times
=======================
**00:02.013** total execution time for **tutorial_auto_examples_mumbo** files:
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
......
......@@ -3,8 +3,8 @@
.. _sphx_glr_tutorial_auto_examples_mvml_sg_execution_times:
Computation times
=================
MVML computation times
======================
**00:03.630** total execution time for **tutorial_auto_examples_mvml** files:
+-------------------------------------------------------------------------------+-----------+--------+
......
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
# Use Case MKL on digit
Use case of the MKL classifier from multimodallearn on the multi-class digit dataset from sklearn, with three views:
- view 0: digit data (colors from sklearn)
- view 1: gradient of the image in the first direction
- view 2: gradient of the image in the second direction
%% Cell type:code id: tags:
``` python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multiclass import OneVsOneClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from multimodal.datasets.base import load_dict, save_dict
from multimodal.tests.data.get_dataset_path import get_dataset_path
from multimodal.datasets.data_sample import MultiModalArray
from multimodal.kernels.lpMKL import MKL
import matplotlib._color_data as mcd
def plot_subplot(X, Y, Y_pred, vue, subplot, title):
    cn = mcd.CSS4_COLORS
    classes = np.unique(Y)
    n_classes = len(classes)
    axs = plt.subplot(subplot[0], subplot[1], subplot[2])
    axs.set_title(title)
    for index, k in zip(range(n_classes), cn.keys()):
        # Indices of the samples of the current class, and of those
        # correctly predicted as this class.
        Y_class, = np.where(Y == classes[index])
        Y_class_pred = np.intersect1d(np.where(Y_pred == classes[index])[0],
                                      np.where(Y_pred == Y)[0])
        # Plot the view values against themselves, colour-coded by class,
        # then overlay the correct predictions with larger orange-edged markers.
        plt.scatter(X._extract_view(vue)[Y_class],
                    X._extract_view(vue)[Y_class],
                    s=40, c=cn[k], edgecolors='blue', linewidths=2,
                    label="real class: " + str(index))
        plt.scatter(X._extract_view(vue)[Y_class_pred],
                    X._extract_view(vue)[Y_class_pred],
                    s=160, edgecolors='orange', linewidths=2,
                    label="class prediction: " + str(index))
if __name__ == '__main__':
    # file = get_dataset_path("digit_histogram.npy")
    file = get_dataset_path("digit_col_grad.npy")
    y = np.load(get_dataset_path("digit_y.npy"))
    base_estimator = DecisionTreeClassifier(max_depth=4)
    dic_digit = load_dict(file)
    XX = MultiModalArray(dic_digit)
    X_train, X_test, y_train, y_test = train_test_split(XX, y)
    # One-versus-one multi-class wrapper around the MKL estimator.
    est4 = OneVsOneClassifier(MKL(lmbda=0.1, nystrom_param=0.2)).fit(X_train, y_train)
    y_pred4 = est4.predict(X_test)
    y_pred44 = est4.predict(X_train)
    print("result of MKL on digit with oneversone")
    result4 = np.mean(y_pred4.ravel() == y_test.ravel()) * 100
    print(result4)
    fig = plt.figure(figsize=(12., 11.))
    fig.suptitle("MKL : result " + str(result4), fontsize=16)
    plot_subplot(X_train, y_train, y_pred44, 0, (4, 1, 1), "train view 0 color")
    plot_subplot(X_test, y_test, y_pred4, 0, (4, 1, 2), "test view 0 color")
    plot_subplot(X_test, y_test, y_pred4, 1, (4, 1, 3), "test view 1 gradient 0")
    plot_subplot(X_test, y_test, y_pred4, 2, (4, 1, 4), "test view 2 gradient 1")
    # plt.legend()
    plt.show()
```
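The views above are loaded from the pre-built dictionary `digit_col_grad.npy`. Below is a minimal sketch of how such a three-view dictionary could be assembled from scikit-learn's digits; the integer keys and the exact preprocessing are illustrative assumptions, not the recipe used to produce the shipped file.

``` python
import numpy as np
from sklearn.datasets import load_digits
from multimodal.datasets.data_sample import MultiModalArray

digits = load_digits()
images = digits.images.astype(float)       # shape (n_samples, 8, 8)
# Image gradients along rows and columns, one array per axis.
grad0, grad1 = np.gradient(images, axis=(1, 2))

dic_views = {
    0: images.reshape(len(images), -1),    # view 0: raw pixel values
    1: grad0.reshape(len(images), -1),     # view 1: gradient, first direction
    2: grad1.reshape(len(images), -1),     # view 2: gradient, second direction
}
X_views = MultiModalArray(dic_views)       # same wrapping as dic_digit above
```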
......@@ -19,7 +19,6 @@ from sklearn.tree import DecisionTreeClassifier
from multimodal.datasets.base import load_dict, save_dict
from multimodal.tests.data.get_dataset_path import get_dataset_path
from multimodal.datasets.data_sample import MultiModalArray
from multimodal.kernels.mvml import MVML
from multimodal.kernels.lpMKL import MKL
import numpy as np
......@@ -50,7 +49,6 @@ if __name__ == '__main__':
# file = get_dataset_path("digit_histogram.npy")
file = get_dataset_path("digit_col_grad.npy")
y = np.load(get_dataset_path("digit_y.npy"))
base_estimator = DecisionTreeClassifier(max_depth=4)
dic_digit = load_dict(file)
XX =MultiModalArray(dic_digit)
X_train, X_test, y_train, y_test = train_test_split(XX, y)
......
f7b5c3f0fd24e4628f03aa7019eea376
\ No newline at end of file
3360d3ee5508f0e16023ee336767f17c
\ No newline at end of file
.. note::
:class: sphx-glr-download-link-note
.. only:: html
.. note::
:class: sphx-glr-download-link-note
Click :ref:`here <sphx_glr_download_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
Click :ref:`here <sphx_glr_download_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py:
.. _sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py:
=====================
......@@ -30,8 +32,8 @@ multi class digit from sklearn, multivue
.. code-block:: none
result of MKL on digit with oneversone
96.88888888888889
/home/dominique/projets/ANR-Lives/scikit-multimodallearn/examples/usecase/plot_usecase_exampleMKL.py:72: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
97.77777777777777
/home/baptiste/Documents/Gitwork/scikit-multimodallearn/examples/usecase/plot_usecase_exampleMKL.py:70: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
plt.show()
......@@ -53,7 +55,6 @@ multi class digit from sklearn, multivue
from multimodal.datasets.base import load_dict, save_dict
from multimodal.tests.data.get_dataset_path import get_dataset_path
from multimodal.datasets.data_sample import MultiModalArray
from multimodal.kernels.mvml import MVML
from multimodal.kernels.lpMKL import MKL
import numpy as np
......@@ -84,7 +85,6 @@ multi class digit from sklearn, multivue
# file = get_dataset_path("digit_histogram.npy")
file = get_dataset_path("digit_col_grad.npy")
y = np.load(get_dataset_path("digit_y.npy"))
base_estimator = DecisionTreeClassifier(max_depth=4)
dic_digit = load_dict(file)
XX =MultiModalArray(dic_digit)
X_train, X_test, y_train, y_test = train_test_split(XX, y)
......@@ -109,7 +109,7 @@ multi class digit from sklearn, multivue
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 20.457 seconds)
**Total running time of the script:** ( 1 minutes 59.263 seconds)
.. _sphx_glr_download_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py:
......@@ -122,13 +122,13 @@ multi class digit from sklearn, multivue
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-python
:download:`Download Python source code: plot_usecase_exampleMKL.py <plot_usecase_exampleMKL.py>`
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-jupyter
:download:`Download Jupyter notebook: plot_usecase_exampleMKL.ipynb <plot_usecase_exampleMKL.ipynb>`
......
......@@ -5,16 +5,16 @@
Computation times
=================
**01:55.487** total execution time for **tutorial_auto_examples_usecase** files:
**02:26.402** total execution time for **tutorial_auto_examples_usecase** files:
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMVML.py` (``plot_usecase_exampleMVML.py``) | 01:14.485 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py` (``plot_usecase_exampleMKL.py``) | 00:20.457 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuCuBo.py` (``plot_usecase_exampleMuCuBo.py``) | 00:14.171 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMumBo.py` (``plot_usecase_exampleMumBo.py``) | 00:06.374 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_usecase_function.py` (``usecase_function.py``) | 00:00.000 | 0.0 MB |
+------------------------------------------------------------------------------------------------------------------+-----------+--------+
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMKL.py` (``plot_usecase_exampleMKL.py``) | 01:59.263 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMuComBo.py` (``plot_usecase_exampleMuComBo.py``) | 00:27.139 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMVML.py` (``plot_usecase_exampleMVML.py``) | 00:00.000 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_plot_usecase_exampleMumBo.py` (``plot_usecase_exampleMumBo.py``) | 00:00.000 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorial_auto_examples_usecase_usecase_function.py` (``usecase_function.py``) | 00:00.000 | 0.0 MB |
+--------------------------------------------------------------------------------------------------------------------+-----------+--------+
.. note::
:class: sphx-glr-download-link-note
.. only:: html
.. note::
:class: sphx-glr-download-link-note
Click :ref:`here <sphx_glr_download_tutorial_auto_examples_usecase_usecase_function.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
Click :ref:`here <sphx_glr_download_tutorial_auto_examples_usecase_usecase_function.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_usecase_usecase_function.py:
.. _sphx_glr_tutorial_auto_examples_usecase_usecase_function.py:
========================
......@@ -60,13 +62,13 @@ Function plot_subplot
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-python
:download:`Download Python source code: usecase_function.py <usecase_function.py>`
.. container:: sphx-glr-download
.. container:: sphx-glr-download sphx-glr-download-jupyter
:download:`Download Jupyter notebook: usecase_function.ipynb <usecase_function.ipynb>`
......
.. _estim-template:
Estimator template
==================
To add a multimodal estimator based on the groundwork of scikit-multimodallearn,
please feel free to use the following template, while complying with the
`Developer's Guide <http://scikit-learn.org/stable/developers>`_ of the
scikit-learn project to ensure full compatibility.
.. code-block:: default
import numpy as np
from sklearn.base import ClassifierMixin, BaseEstimator
from sklearn.utils import check_X_y
from sklearn.utils.multiclass import check_classification_targets
from sklearn.utils.validation import check_is_fitted
from multimodal.boosting.boost import UBoosting


class NewMultiModalEstimator(BaseEstimator, ClassifierMixin, UBoosting):
    r"""
    Your documentation
    """

    def __init__(self, your_attributes=None):
        self.your_attributes = your_attributes

    def fit(self, X, y, views_ind=None):
        """Build a multimodal classifier from the training set (X, y).

        Parameters
        ----------
        X : dict dictionary with all views
            or
            `MultiModalData`, `MultiModalArray`, `MultiModalSparseArray`
            or
            {array-like, sparse matrix}, shape = (n_samples, n_features)
            Training multi-view input samples.
            Sparse matrix can be CSC, CSR, COO, DOK, or LIL.
            COO, DOK and LIL are converted to CSR.

        y : array-like, shape = (n_samples,)
            Target values (class labels).

        views_ind : array-like (default=[0, n_features//2, n_features])
            Parameter specifying how to extract the data views from X:

            - If views_ind is a 1-D array of sorted integers, the entries
              indicate the limits of the slices used to extract the views,
              where view ``n`` is given by
              ``X[:, views_ind[n]:views_ind[n+1]]``.

              With this convention each view is therefore a view (in the NumPy
              sense) of X and no copy of the data is done.

            - If views_ind is an array of arrays of integers, then each array
              of integers ``views_ind[n]`` specifies the indices of the view
              ``n``, which is then given by ``X[:, views_ind[n]]``.

              With this convention each view creates therefore a partial copy
              of the data in X. This convention is thus more flexible but less
              efficient than the previous one.

        Returns
        -------
        self : object
            Returns self.

        Raises
        ------
        ValueError  estimator must support sample_weight

        ValueError  where `X` and `views_ind` are not compatible
        """
        # _global_X_transform processes the multimodal dataset to transform it
        # into the MultiModalArray format.
        self.X_ = self._global_X_transform(X, views_ind=views_ind)

        # Ensure the proper format for views_ind and return the number of views.
        views_ind_, n_views = self.X_._validate_views_ind(self.X_.views_ind,
                                                          self.X_.shape[1])

        # According to scikit-learn guidelines.
        check_X_y(self.X_, y)
        if not isinstance(y, np.ndarray):
            y = np.asarray(y)
        check_classification_targets(y)
        self._validate_estimator()

        return self

    def predict(self, X):
        """Predict classes for X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape = (n_samples, n_features)
            Multi-view input samples.
            Sparse matrix can be CSC, CSR, COO, DOK, or LIL.
            COO, DOK and LIL are converted to CSR.

        Returns
        -------
        y : numpy.ndarray, shape = (n_samples,)
            Predicted classes.

        Raises
        ------
        ValueError  'X' input matrix must have the same total number of
        features as the 'X' fit data
        """
        # According to scikit-learn guidelines.
        check_is_fitted(self, "your_attributes")
        # _global_X_transform processes the multimodal dataset to transform it
        # into the MultiModalArray format.
        X = self._global_X_transform(X, views_ind=self.X_.views_ind)
        # Ensure that X is in the proper format.
        X = self._validate_X_predict(X)

        # Returning fake multi-class labels.
        return np.random.randint(0, 5, size=X.shape[0])
\ No newline at end of file
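Once the template is completed, the estimator behaves like any scikit-learn classifier. The following is a minimal usage sketch, assuming the class above has been filled in; the toy data, the estimator defaults and the ``views_ind`` values are illustrative only. It exercises both ``views_ind`` conventions documented in ``fit``.

.. code-block:: default

import numpy as np

# Toy multi-view data: 20 features where columns 0-9 form view 0 and
# columns 10-19 form view 1, with binary labels.
X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=100)

clf = NewMultiModalEstimator()
# First convention: sorted integer limits; view n is
# X[:, views_ind[n]:views_ind[n+1]] (no copy of the data).
clf.fit(X, y, views_ind=[0, 10, 20])
y_pred = clf.predict(X)

# Second convention: one array of column indices per view
# (each view is a partial copy of X).
clf.fit(X, y, views_ind=[np.arange(0, 10), np.arange(10, 20)])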
......@@ -38,7 +38,8 @@ The development of scikit-multimodallearn follows the guidelines provided by the
scikit-learn community.
Refer to the `Developer's Guide <http://scikit-learn.org/stable/developers>`_
of the scikit-learn project for more details.
of the scikit-learn project for general details. Expanding the library can be
done by following the template provided in :ref:`estim-template` .
Source code
-----------
......
......@@ -3,7 +3,7 @@
Computation times
=================
total execution time for **tutorial_auto_examples** files:
Total execution time for **tutorial_auto_examples** files:
.. toctree::
......