Commit ddeba9f2 authored by Dominique Benielli

doc fix

parent cada5d4e
Pipeline #3930 passed
Showing 4 additions and 1396 deletions
@@ -24,8 +24,8 @@ Documentation
     reference/api
     tutorial/install_devel
     tutorial/auto_examples/index
-    tutorial/auto_examples/sg_execution_times
+    tutorial/times
+    tutorial/credits
 Indices and tables
No preview for this file type
No preview for this file type
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
==================================
MuCumbo 2 views, 2 classes example
==================================

In this toy example, we generate data from two classes, split between two
two-dimensional views.

For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.

Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.

The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
%% Cell type:code id: tags:
``` python
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt


def generate_data(n_samples, lim):
    """Generate random data in a rectangle"""
    lim = np.array(lim)
    n_features = lim.shape[0]
    data = np.random.random((n_samples, n_features))
    data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
    return data


seed = 12
np.random.seed(seed)

n_samples = 100

view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [1., 2.]])))

view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [1., 2.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]])))

X = np.concatenate((view_0, view_1), axis=1)

y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1

views_ind = np.array([0, 2, 4])

n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)

print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
      'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
    print(' - iteration {}, score: {}'.format(ind + 1, score))


print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))

# print('\nThe first two sub-classifiers use the data of view 0 to compute '
#       'their\nclassification results, while the third one uses the data of '
#       'view 1:\n'
#       ' best views: {}'. format(clf.best_views_))

print('\nThe first figure displays the data, splitting the representation '
      'between the\ntwo views.')

fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
    ax = plt.subplot(2, 1, ind_view + 1)
    ax.set_title('View {}'.format(ind_view))
    ind_feature = ind_view * 2
    styles = ('.b', 'xb', '.r', 'xr')
    labels = ('non-separated', 'separated')
    for ind in range(4):
        ind_class = ind // 2
        label = labels[(ind + ind_view) % 2]
        ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
                X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
                styles[ind],
                label='Class {} ({})'.format(ind_class, label))
    ax.legend()

print('\nThe second figure displays the classification results for the '
      'sub-classifiers\non the learning sample data.\n')

styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
#              'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
#     best_view = clf.best_views_[ind_estimator]
#     y_pred = clf.estimators_[ind_estimator].predict(
#         X[:, 2*best_view:2*best_view+2])
#     background_color = (1.0, 1.0, 0.9)
#     for ind_view in range(2):
#         ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
#         if ind_view == best_view:
#             ax.set_facecolor(background_color)
#         ax.set_title(
#             'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
#         ind_feature = ind_view * 2
#         for ind_class in range(2):
#             ind_samples = (y_pred == ind_class)
#             ax.plot(X[ind_samples, ind_feature],
#                     X[ind_samples, ind_feature + 1],
#                     styles[ind_class],
#                     label='Class {}'.format(ind_class))
#         ax.legend(title='Predicted class:')

plt.show()
```
# -*- coding: utf-8 -*-
"""
==================================
MuCumbo 2 views, 2 classes example
==================================
In this toy example, we generate data from two classes, split between two
two-dimensional views.
For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.
Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.
The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
"""
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
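    # rescale each uniform [0, 1) coordinate into the interval
    # [lim[:, 0], lim[:, 1]) of the corresponding feature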
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 100
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
X = np.concatenate((view_0, view_1), axis=1)
y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1
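# views_ind gives the column boundaries of the views inside X: view v
# spans the columns views_ind[v]:views_ind[v+1], so [0, 2, 4] maps
# features 0-1 to view 0 and features 2-3 to view 1.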
views_ind = np.array([0, 2, 4])
n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first two sub-classifiers use the data of view 0 to compute '
# 'their\nclassification results, while the third one uses the data of '
# 'view 1:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\ntwo views.')
fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
ax = plt.subplot(2, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
styles = ('.b', 'xb', '.r', 'xr')
labels = ('non-separated', 'separated')
for ind in range(4):
ind_class = ind // 2
label = labels[(ind + ind_view) % 2]
ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
styles[ind],
label='Class {} ({})'.format(ind_class, label))
ax.legend()
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(2):
# ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(2):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:')
plt.show()
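Since the fitted classifier follows the scikit-learn estimator conventions already used above (`fit`, `staged_score`), it can also be scored on points drawn afresh from the same generating scheme. A minimal sketch, assuming the standard `predict`/`score` methods are available (they do not appear in the original example); `n_new`, `view_0_new` and `view_1_new` are illustrative names:

``` python
# Draw a small fresh sample (class 0 separated in view 0, class 1 likewise)
# and score the already-fitted classifier on it.
n_new = 10
view_0_new = np.concatenate((generate_data(n_new, [[1., 2.], [0., 1.]]),
                             generate_data(n_new, [[0., 1.], [1., 2.]])))
view_1_new = np.concatenate((generate_data(n_new, [[0., 1.], [0., 1.]]),
                             generate_data(n_new, [[0., 1.], [0., 1.]])))
X_new = np.concatenate((view_0_new, view_1_new), axis=1)
y_new = np.zeros(2*n_new, dtype=np.int64)
y_new[n_new:] = 1
print('score on a fresh sample: {}'.format(clf.score(X_new, y_new)))
```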
.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_tutorial_auto_examples_cumbo_cumbo_plot_2_views_2_classes.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_cumbo_cumbo_plot_2_views_2_classes.py:
==================================
MuCumbo 2 views, 2 classes example
==================================
In this toy example, we generate data from two classes, split between two
two-dimensional views.
For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.
Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.
The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
.. code-block:: default
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 100
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
X = np.concatenate((view_0, view_1), axis=1)
y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1
views_ind = np.array([0, 2, 4])
n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first two sub-classifiers use the data of view 0 to compute '
# 'their\nclassification results, while the third one uses the data of '
# 'view 1:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\ntwo views.')
fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
ax = plt.subplot(2, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
styles = ('.b', 'xb', '.r', 'xr')
labels = ('non-separated', 'separated')
for ind in range(4):
ind_class = ind // 2
label = labels[(ind + ind_view) % 2]
ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
styles[ind],
label='Class {} ({})'.format(ind_class, label))
ax.legend()
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(2):
# ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(2):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:')
plt.show()
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. _sphx_glr_download_tutorial_auto_examples_cumbo_cumbo_plot_2_views_2_classes.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download
:download:`Download Python source code: cumbo_plot_2_views_2_classes.py <cumbo_plot_2_views_2_classes.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: cumbo_plot_2_views_2_classes.ipynb <cumbo_plot_2_views_2_classes.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
File deleted
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
==================================
MuCumbo 3 views, 3 classes example
==================================

In this toy example, we generate data from three classes, split between three
two-dimensional views.

For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.

Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
%% Cell type:code id: tags:
``` python
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt


def generate_data(n_samples, lim):
    """Generate random data in a rectangle"""
    lim = np.array(lim)
    n_features = lim.shape[0]
    data = np.random.random((n_samples, n_features))
    data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
    return data


seed = 12
np.random.seed(seed)

n_samples = 300

view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 2.], [0., 1.]])))

view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]])))

view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]])))

X = np.concatenate((view_0, view_1, view_2), axis=1)

y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2

views_ind = np.array([0, 2, 4, 6])

n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)

print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
      'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
    print(' - iteration {}, score: {}'.format(ind + 1, score))

print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))

# print('\nThe first sub-classifier uses the data of view 0 to compute '
#       'its classification\nresults, the second and third sub-classifiers use '
#       'the data of view 1, while the\nfourth one uses the data of '
#       'view 2:\n'
#       ' best views: {}'. format(clf.best_views_))

print('\nThe first figure displays the data, splitting the representation '
      'between the\nthree views.')

styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
    ax = plt.subplot(3, 1, ind_view + 1)
    ax.set_title('View {}'.format(ind_view))
    ind_feature = ind_view * 2
    for ind_class in range(3):
        ind_samples = (y == ind_class)
        ax.plot(X[ind_samples, ind_feature],
                X[ind_samples, ind_feature + 1],
                styles[ind_class],
                label='Class {}'.format(ind_class))
    ax.legend(loc='upper left', framealpha=0.9)

print('\nThe second figure displays the classification results for the '
      'sub-classifiers\non the learning sample data.\n')

# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
#              'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
#     best_view = clf.best_views_[ind_estimator]
#     y_pred = clf.estimators_[ind_estimator].predict(
#         X[:, 2*best_view:2*best_view+2])
#     background_color = (1.0, 1.0, 0.9)
#     for ind_view in range(3):
#         ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
#         if ind_view == best_view:
#             ax.set_facecolor(background_color)
#         ax.set_title(
#             'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
#         ind_feature = ind_view * 2
#         for ind_class in range(3):
#             ind_samples = (y_pred == ind_class)
#             ax.plot(X[ind_samples, ind_feature],
#                     X[ind_samples, ind_feature + 1],
#                     styles[ind_class],
#                     label='Class {}'.format(ind_class))
#         ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)

plt.show()
```
# -*- coding: utf-8 -*-
"""
==================================
MuCumbo 3 views, 3 classes example
==================================
In this toy example, we generate data from three classes, split between three
two-dimensional views.
For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.
Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
"""
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 300
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]])))
X = np.concatenate((view_0, view_1, view_2), axis=1)
y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2
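# views_ind gives the column boundaries of the three views inside X:
# view v spans the columns views_ind[v]:views_ind[v+1].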
views_ind = np.array([0, 2, 4, 6])
n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first sub-classifier uses the data of view 0 to compute '
# 'its classification\nresults, the second and third sub-classifiers use '
# 'the data of view 1, while the\nfourth one uses the data of '
# 'view 2:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\nthree views.')
styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
ax = plt.subplot(3, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
for ind_class in range(3):
ind_samples = (y == ind_class)
ax.plot(X[ind_samples, ind_feature],
X[ind_samples, ind_feature + 1],
styles[ind_class],
label='Class {}'.format(ind_class))
ax.legend(loc='upper left', framealpha=0.9)
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(3):
# ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(3):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)
plt.show()
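Per-iteration predictions can be inspected in the same staged fashion; a minimal sketch, assuming the classifier exposes `staged_predict` alongside the `staged_score` used above (an assumption, since only `staged_score` appears in this example):

``` python
# Count, at each boosting iteration, how many training points are still
# mislabelled (assumes staged_predict exists, mirroring staged_score).
for ind, y_pred in enumerate(clf.staged_predict(X)):
    n_errors = int(np.sum(y_pred != y))
    print('iteration {}: {} mislabelled training points'.format(ind + 1,
                                                                n_errors))
```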
.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_tutorial_auto_examples_cumbo_cumbo_plot_3_views_3_classes.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_cumbo_cumbo_plot_3_views_3_classes.py:
==================================
MuCumbo 3 views, 3 classes example
==================================
In this toy example, we generate data from three classes, split between three
two-dimensional views.
For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.
Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
.. code-block:: default
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 300
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]])))
X = np.concatenate((view_0, view_1, view_2), axis=1)
y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2
views_ind = np.array([0, 2, 4, 6])
n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first sub-classifier uses the data of view 0 to compute '
# 'its classification\nresults, the second and third sub-classifiers use '
# 'the data of view 1, while the\nfourth one uses the data of '
# 'view 2:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\nthree views.')
styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
ax = plt.subplot(3, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
for ind_class in range(3):
ind_samples = (y == ind_class)
ax.plot(X[ind_samples, ind_feature],
X[ind_samples, ind_feature + 1],
styles[ind_class],
label='Class {}'.format(ind_class))
ax.legend(loc='upper left', framealpha=0.9)
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(3):
# ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(3):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)
plt.show()
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. _sphx_glr_download_tutorial_auto_examples_cumbo_cumbo_plot_3_views_3_classes.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download
:download:`Download Python source code: cumbo_plot_3_views_3_classes.py <cumbo_plot_3_views_3_classes.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: cumbo_plot_3_views_3_classes.ipynb <cumbo_plot_3_views_3_classes.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
File deleted
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
==================================
MuCumbo 2 views, 2 classes example
==================================

In this toy example, we generate data from two classes, split between two
two-dimensional views.

For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.

Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.

The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
%% Cell type:code id: tags:
``` python
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt


def generate_data(n_samples, lim):
    """Generate random data in a rectangle"""
    lim = np.array(lim)
    n_features = lim.shape[0]
    data = np.random.random((n_samples, n_features))
    data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
    return data


seed = 12
np.random.seed(seed)

n_samples = 100

view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [1., 2.]])))

view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [1., 2.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]])))

X = np.concatenate((view_0, view_1), axis=1)

y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1

views_ind = np.array([0, 2, 4])

n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)

print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
      'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
    print(' - iteration {}, score: {}'.format(ind + 1, score))


print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))

# print('\nThe first two sub-classifiers use the data of view 0 to compute '
#       'their\nclassification results, while the third one uses the data of '
#       'view 1:\n'
#       ' best views: {}'. format(clf.best_views_))

print('\nThe first figure displays the data, splitting the representation '
      'between the\ntwo views.')

fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
    ax = plt.subplot(2, 1, ind_view + 1)
    ax.set_title('View {}'.format(ind_view))
    ind_feature = ind_view * 2
    styles = ('.b', 'xb', '.r', 'xr')
    labels = ('non-separated', 'separated')
    for ind in range(4):
        ind_class = ind // 2
        label = labels[(ind + ind_view) % 2]
        ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
                X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
                styles[ind],
                label='Class {} ({})'.format(ind_class, label))
    ax.legend()

print('\nThe second figure displays the classification results for the '
      'sub-classifiers\non the learning sample data.\n')

styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
#              'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
#     best_view = clf.best_views_[ind_estimator]
#     y_pred = clf.estimators_[ind_estimator].predict(
#         X[:, 2*best_view:2*best_view+2])
#     background_color = (1.0, 1.0, 0.9)
#     for ind_view in range(2):
#         ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
#         if ind_view == best_view:
#             ax.set_facecolor(background_color)
#         ax.set_title(
#             'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
#         ind_feature = ind_view * 2
#         for ind_class in range(2):
#             ind_samples = (y_pred == ind_class)
#             ax.plot(X[ind_samples, ind_feature],
#                     X[ind_samples, ind_feature + 1],
#                     styles[ind_class],
#                     label='Class {}'.format(ind_class))
#         ax.legend(title='Predicted class:')

plt.show()
```
# -*- coding: utf-8 -*-
"""
==================================
MuCumbo 2 views, 2 classes example
==================================
In this toy example, we generate data from two classes, split between two
two-dimensional views.
For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.
Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.
The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
"""
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 100
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
X = np.concatenate((view_0, view_1), axis=1)
y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1
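# views_ind gives the column boundaries of the views inside X: view v
# spans the columns views_ind[v]:views_ind[v+1], so [0, 2, 4] maps
# features 0-1 to view 0 and features 2-3 to view 1.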
views_ind = np.array([0, 2, 4])
n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first two sub-classifiers use the data of view 0 to compute '
# 'their\nclassification results, while the third one uses the data of '
# 'view 1:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\ntwo views.')
fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
ax = plt.subplot(2, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
styles = ('.b', 'xb', '.r', 'xr')
labels = ('non-separated', 'separated')
for ind in range(4):
ind_class = ind // 2
label = labels[(ind + ind_view) % 2]
ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
styles[ind],
label='Class {} ({})'.format(ind_class, label))
ax.legend()
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(2):
# ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(2):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:')
plt.show()
.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_tutorial_auto_examples_cumbo_plot_2_views_2_classes.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_cumbo_plot_2_views_2_classes.py:
==================================
MuCumbo 2 views, 2 classes example
==================================
In this toy example, we generate data from two classes, split between two
two-dimensional views.
For each view, the data are generated so that half of the points of each class
are well separated in the plane, while the other half of the points are not
separated and placed in the same area. We also ensure that the points that are
not separated in one view are well separated in the other view.
Thus, in the figure representing the data, the points represented by crosses
(x) are well separated in view 0 while they are not separated in view 1, and
the points represented by dots (.) are well separated in view 1 while they are
not separated in view 0. In this figure, the blue symbols represent points
of class 0, while red symbols represent points of class 1.
The MuCuMBo algorithm takes advantage of the complementarity of the two views
to correctly classify the points.
.. code-block:: default
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 100
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [1., 2.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
X = np.concatenate((view_0, view_1), axis=1)
y = np.zeros(4*n_samples, dtype=np.int64)
y[2*n_samples:] = 1
views_ind = np.array([0, 2, 4])
n_estimators = 3
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 3 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses three sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first two sub-classifiers use the data of view 0 to compute '
# 'their\nclassification results, while the third one uses the data of '
# 'view 1:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\ntwo views.')
fig = plt.figure(figsize=(10., 8.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(2):
ax = plt.subplot(2, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
styles = ('.b', 'xb', '.r', 'xr')
labels = ('non-separated', 'separated')
for ind in range(4):
ind_class = ind // 2
label = labels[(ind + ind_view) % 2]
ax.plot(X[n_samples*ind:n_samples*(ind+1), ind_feature],
X[n_samples*ind:n_samples*(ind+1), ind_feature + 1],
styles[ind],
label='Class {} ({})'.format(ind_class, label))
ax.legend()
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
styles = ('.b', '.r')
# fig = plt.figure(figsize=(12., 7.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(2):
# ax = plt.subplot(2, 3, ind_estimator + 3*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(2):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:')
plt.show()
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. _sphx_glr_download_tutorial_auto_examples_cumbo_plot_2_views_2_classes.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download
:download:`Download Python source code: cumbo_plot_2_views_2_classes.py <cumbo_plot_2_views_2_classes.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: cumbo_plot_2_views_2_classes.ipynb <cumbo_plot_2_views_2_classes.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
File deleted
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
==================================
MuCumbo 3 views, 3 classes example
==================================

In this toy example, we generate data from three classes, split between three
two-dimensional views.

For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.

Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
%% Cell type:code id: tags:
``` python
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt


def generate_data(n_samples, lim):
    """Generate random data in a rectangle"""
    lim = np.array(lim)
    n_features = lim.shape[0]
    data = np.random.random((n_samples, n_features))
    data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
    return data


seed = 12
np.random.seed(seed)

n_samples = 300

view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 2.], [0., 1.]])))

view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]])))

view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
                         generate_data(n_samples, [[0., 1.], [0., 1.]]),
                         generate_data(n_samples, [[1., 2.], [0., 1.]])))

X = np.concatenate((view_0, view_1, view_2), axis=1)

y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2

views_ind = np.array([0, 2, 4, 6])

n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)

print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
      'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
    print(' - iteration {}, score: {}'.format(ind + 1, score))

print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
      ' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))

# print('\nThe first sub-classifier uses the data of view 0 to compute '
#       'its classification\nresults, the second and third sub-classifiers use '
#       'the data of view 1, while the\nfourth one uses the data of '
#       'view 2:\n'
#       ' best views: {}'. format(clf.best_views_))

print('\nThe first figure displays the data, splitting the representation '
      'between the\nthree views.')

styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
    ax = plt.subplot(3, 1, ind_view + 1)
    ax.set_title('View {}'.format(ind_view))
    ind_feature = ind_view * 2
    for ind_class in range(3):
        ind_samples = (y == ind_class)
        ax.plot(X[ind_samples, ind_feature],
                X[ind_samples, ind_feature + 1],
                styles[ind_class],
                label='Class {}'.format(ind_class))
    ax.legend(loc='upper left', framealpha=0.9)

print('\nThe second figure displays the classification results for the '
      'sub-classifiers\non the learning sample data.\n')

# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
#              'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
#     best_view = clf.best_views_[ind_estimator]
#     y_pred = clf.estimators_[ind_estimator].predict(
#         X[:, 2*best_view:2*best_view+2])
#     background_color = (1.0, 1.0, 0.9)
#     for ind_view in range(3):
#         ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
#         if ind_view == best_view:
#             ax.set_facecolor(background_color)
#         ax.set_title(
#             'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
#         ind_feature = ind_view * 2
#         for ind_class in range(3):
#             ind_samples = (y_pred == ind_class)
#             ax.plot(X[ind_samples, ind_feature],
#                     X[ind_samples, ind_feature + 1],
#                     styles[ind_class],
#                     label='Class {}'.format(ind_class))
#         ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)

plt.show()
```
# -*- coding: utf-8 -*-
"""
==================================
MuCumbo 3 views, 3 classes example
==================================
In this toy example, we generate data from three classes, split between three
two-dimensional views.
For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.
Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
"""
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 300
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]])))
X = np.concatenate((view_0, view_1, view_2), axis=1)
y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2
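# views_ind gives the column boundaries of the three views inside X:
# view v spans the columns views_ind[v]:views_ind[v+1].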
views_ind = np.array([0, 2, 4, 6])
n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first sub-classifier uses the data of view 0 to compute '
# 'its classification\nresults, the second and third sub-classifiers use '
# 'the data of view 1, while the\nfourth one uses the data of '
# 'view 2:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\nthree views.')
styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
ax = plt.subplot(3, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
for ind_class in range(3):
ind_samples = (y == ind_class)
ax.plot(X[ind_samples, ind_feature],
X[ind_samples, ind_feature + 1],
styles[ind_class],
label='Class {}'.format(ind_class))
ax.legend(loc='upper left', framealpha=0.9)
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(3):
# ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(3):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)
plt.show()
.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_tutorial_auto_examples_cumbo_plot_3_views_3_classes.py>` to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_tutorial_auto_examples_cumbo_plot_3_views_3_classes.py:
==================================
MuCumbo 3 views, 3 classes example
==================================
In this toy example, we generate data from three classes, split between three
two-dimensional views.
For each view, the data are generated so that the points for two classes are
well separated, while the points for the third class are not separated from
the two other classes. That means that, taken separately, none of the single
views allows for a good classification of the data.
Nevertheless, the MuCuMBo algorithm takes advantage of the complementarity of
the views to correctly classify the points.
.. code-block:: default
import numpy as np
from multimodal.boosting.cumbo import MuCumboClassifier
from matplotlib import pyplot as plt
def generate_data(n_samples, lim):
"""Generate random data in a rectangle"""
lim = np.array(lim)
n_features = lim.shape[0]
data = np.random.random((n_samples, n_features))
data = (lim[:, 1]-lim[:, 0]) * data + lim[:, 0]
return data
seed = 12
np.random.seed(seed)
n_samples = 300
view_0 = np.concatenate((generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]])))
view_1 = np.concatenate((generate_data(n_samples, [[1., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]])))
view_2 = np.concatenate((generate_data(n_samples, [[0., 2.], [0., 1.]]),
generate_data(n_samples, [[0., 1.], [0., 1.]]),
generate_data(n_samples, [[1., 2.], [0., 1.]])))
X = np.concatenate((view_0, view_1, view_2), axis=1)
y = np.zeros(3*n_samples, dtype=np.int64)
y[n_samples:2*n_samples] = 1
y[2*n_samples:] = 2
views_ind = np.array([0, 2, 4, 6])
n_estimators = 4
clf = MuCumboClassifier(n_estimators=n_estimators)
clf.fit(X, y, views_ind)
print('\nAfter 4 iterations, the MuCuMBo classifier reaches exact '
'classification for the\nlearning samples:')
for ind, score in enumerate(clf.staged_score(X, y)):
print(' - iteration {}, score: {}'.format(ind + 1, score))
print('\nThe resulting MuCuMBo classifier uses four sub-classifiers that are '
      'weighted\nusing the following weights:\n'
' estimator weights alpha: {}'.format(clf.estimator_weights_alpha_))
# print('\nThe first sub-classifier uses the data of view 0 to compute '
# 'its classification\nresults, the second and third sub-classifiers use '
# 'the data of view 1, while the\nfourth one uses the data of '
# 'view 2:\n'
# ' best views: {}'. format(clf.best_views_))
print('\nThe first figure displays the data, splitting the representation '
'between the\nthree views.')
styles = ('.b', '.r', '.g')
fig = plt.figure(figsize=(12., 11.))
fig.suptitle('Representation of the data', size=16)
for ind_view in range(3):
ax = plt.subplot(3, 1, ind_view + 1)
ax.set_title('View {}'.format(ind_view))
ind_feature = ind_view * 2
for ind_class in range(3):
ind_samples = (y == ind_class)
ax.plot(X[ind_samples, ind_feature],
X[ind_samples, ind_feature + 1],
styles[ind_class],
label='Class {}'.format(ind_class))
ax.legend(loc='upper left', framealpha=0.9)
print('\nThe second figure displays the classification results for the '
'sub-classifiers\non the learning sample data.\n')
# fig = plt.figure(figsize=(14., 11.))
# fig.suptitle('Classification results on the learning data for the '
# 'sub-classifiers', size=16)
# for ind_estimator in range(n_estimators):
# best_view = clf.best_views_[ind_estimator]
# y_pred = clf.estimators_[ind_estimator].predict(
# X[:, 2*best_view:2*best_view+2])
# background_color = (1.0, 1.0, 0.9)
# for ind_view in range(3):
# ax = plt.subplot(3, 4, ind_estimator + 4*ind_view + 1)
# if ind_view == best_view:
# ax.set_facecolor(background_color)
# ax.set_title(
# 'Sub-classifier {} - View {}'.format(ind_estimator, ind_view))
# ind_feature = ind_view * 2
# for ind_class in range(3):
# ind_samples = (y_pred == ind_class)
# ax.plot(X[ind_samples, ind_feature],
# X[ind_samples, ind_feature + 1],
# styles[ind_class],
# label='Class {}'.format(ind_class))
# ax.legend(title='Predicted class:', loc='upper left', framealpha=0.9)
plt.show()
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. _sphx_glr_download_tutorial_auto_examples_cumbo_plot_3_views_3_classes.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download
:download:`Download Python source code: cumbo_plot_3_views_3_classes.py <cumbo_plot_3_views_3_classes.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: cumbo_plot_3_views_3_classes.ipynb <cumbo_plot_3_views_3_classes.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
File deleted
@@ -21,10 +21,6 @@ Multimodal Examples
 .. _sphx_glr_tutorial_auto_examples_cumbo:
-.. _examples:
-
-Examples
-========
 MuCuMBo Examples
 ----------------
@@ -82,10 +78,6 @@ cooperation between views for classification.
 .. _sphx_glr_tutorial_auto_examples_mumbo:
-.. _examples:
-
-Examples
-========
 MuMBo Examples
 --------------
@@ -143,13 +135,9 @@ cooperation between views for classification.
 .. _sphx_glr_tutorial_auto_examples_mvml:
-.. _examples:
-
-Examples
-========
-MVML
-----
+MVML Examples
+-------------
 The following toy examples illustrate how the MVML algorithm