Commit 5a000a8d authored by Baptiste Bauvin

First example OK

parent 497c1524

Showing changed files with 132 additions and 334 deletions

====================================
First steps with Multiview Platform
====================================

Context

@@ -68,7 +67,7 @@ During the whole benchmark, the log file will be printed in the terminal. To sta

The execution should take less than five minutes. We will first analyze the results and parse through the information the platform outputs.

**Understanding the results**

The result structure can be startling at first, but as the platform provides a lot of information, it has to be organized.

@@ -76,6 +75,7 @@ The results are stored in ``multiview_platform/examples/results/example_1/``. He

Then comes a directory named after the amount of noise in the experiment; we didn't add any, so it is ``n_0/``. Finally, a directory named after the date and time at which the experiment began. Let's say you started the benchmark on the 25th of December 1560 at 03:42 PM; the directory's name should then be ``started_1560_12_25-15_42/``.
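
The directory name can be reproduced programmatically. A minimal sketch, assuming the platform formats the start time with a ``started_%Y_%m_%d-%H_%M`` pattern (inferred from the example above):

.. code-block:: python

    from datetime import datetime

    # Rebuild the example's directory name from its start time.
    start = datetime(1560, 12, 25, 15, 42)
    print(start.strftime("started_%Y_%m_%d-%H_%M"))  # started_1560_12_25-15_42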

From here, the result directory is structured as follows:

.. code-block:: bash

    | | ├── ..]
    | | ├── train_labels.csv
    | │ └── train_indices.csv
    | ├── 1560_12_25-15_42-*-LOG.log
    | ├── config_file.yml
    | ├── 1560_12_25-15_42-*-accuracy_score.png
    | ├── 1560_12_25-15_42-*-accuracy_score.csv
    | ├── ..]
    | └── random_state.pickle
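
Since a new ``started_*`` directory is created for each run, it can be handy to locate the most recent one automatically. A minimal sketch, assuming the layout described above and a database directory named ``plausible``:

.. code-block:: python

    import glob
    import os

    # The timestamped names sort chronologically, so the last one is the latest run.
    base = "multiview_platform/examples/results/example_1/plausible/n_0"
    latest = sorted(glob.glob(os.path.join(base, "started_*")))[-1]
    print(latest)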

The structure can seem complex, but it provides a lot of information, from the most general to the most precise.

Let's comment on each file:

``1560_12_25-15_42-*-accuracy_score.png`` and ``1560_12_25-15_42-*-accuracy_score.csv``
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

These files contain the scores of each classifier for the accuracy metric, ordered with the best ones on the right and the worst ones on the left, as an image or as a csv matrix.

The image version is as follows:

.. figure:: ./images/accuracy.png
    :scale: 25

    A bar plot showing the score on the training set (light gray) and on the testing set (dark gray). For each monoview classifier on each view, and for each multiview classifier, the scores are printed under each bar, above the classifier's name. It is highly recommended to click on the image to be able to zoom.

The csv file is a matrix with the score on train stored in the first row and the score on test stored in the second one; each classifier is presented in a column. It is loadable with pandas.

Similar files have been generated for the f1 metric (``1560_12_25-15_42-*-f1_score.png`` and ``1560_12_25-15_42-*-f1_score.csv``).
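
Since the csv version is loadable with pandas, inspecting the scores takes a few lines. A minimal sketch; the ``*`` in the file name stands for run-specific information, so we resolve it with ``glob``, and the index-column assumption is ours:

.. code-block:: python

    import glob

    import pandas as pd

    # Rows hold the train and test scores, one column per classifier.
    path = glob.glob("*-accuracy_score.csv")[0]
    scores = pd.read_csv(path, index_col=0)  # assumes the first column holds row labels
    print(scores)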

``1560_12_25-15_42-*-error_analysis_2D.png`` and ``1560_12_25-15_42-*-error_analysis_2D.html``
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

In these files, one can visualize the success or failure of each classifier on each example.
Below, ``1560_12_25-15_42-*-error_analysis_2D.html`` is displayed.

It is the representation of a matrix in which the rows are the examples and the columns are the classifiers.
If a classifier (say Lasso on the first view) misclassified an example (say example number 75), a black rectangle is printed in the row corresponding to example 75 and the column corresponding to Lasso-ViewNumber0; if the classifier successfully classified the example, a white rectangle is printed.

.. raw:: html
    :file: ./images/error_2D.html

This figure is the html version of the visualization of the classifiers' errors. It is interactive, so by hovering over it, the information on each classifier and example is printed. The classifiers are ordered as follows:

From left to right: all the monoview classifiers on ViewNumber0, all the ones on ViewNumber1, ..., then, at the far right, the multiview classifiers.

This html image is also available in ``.png`` format, but it is then not interactive, and therefore harder to analyze.

This visualization is useful to detect possible outlier examples in the dataset, as well as failing classifiers. For example, a mostly black horizontal line for an example means that it has been misclassified by most of the classifiers. This could mean that the example is incorrectly labeled in the dataset, or that it is very hard to classify. Symmetrically, a mostly black column means that a classifier failed spectacularly on the task at hand.

On the figure displayed here, each view is clearly visible, as most monoview classifiers fail on the same examples within a view. This is understandable behaviour, as the Plausible dataset's views are generated and noised independently. Moreover, as confirmed by the accuracy graph, four monoview classifiers classified all the examples into the same class, and thus display a black half-column.

The data used to generate those matrices is available in ``1560_12_25-15_42-*-2D_plot_data.csv``.
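
To re-plot or post-process the error matrix yourself, the csv can be read back. A minimal sketch, assuming it stores the example-by-classifier success/failure matrix described above (the exact layout may differ):

.. code-block:: python

    import glob

    import matplotlib.pyplot as plt
    import pandas as pd

    # Assumption: one row per example, one column per classifier.
    data = pd.read_csv(glob.glob("*-2D_plot_data.csv")[0], index_col=0)
    plt.imshow(data.values, cmap="gray", aspect="auto", interpolation="none")
    plt.xlabel("Classifiers")
    plt.ylabel("Examples")
    plt.show()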

``1560_12_25-15_42-*-error_analysis_bar.png``
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

This file is a different way to visualize the same information as the two previous ones. Indeed, it is a bar plot with one bar per example, counting the number of classifiers that failed to classify that particular example.

.. figure:: ./images/bar_error.png
    :scale: 25

    The bar plot showing, for each example, how many classifiers failed on it.

The data used to generate this graph is available in ``1560_12_25-15_42-*-bar_plot_data.csv``.
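
The bar plot can likewise be rebuilt from the csv. A minimal sketch, assuming it stores, for each example, the number of classifiers that failed on it:

.. code-block:: python

    import glob

    import matplotlib.pyplot as plt
    import pandas as pd

    errors = pd.read_csv(glob.glob("*-bar_plot_data.csv")[0], index_col=0)
    plt.bar(range(len(errors)), errors.values.ravel())
    plt.xlabel("Example index")
    plt.ylabel("Number of failing classifiers")
    plt.show()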

``config_file.yml``
<<<<<<<<<<<<<<<<<<<

This is a copy of the configuration file used to run the experiment.
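
Keeping this copy makes the run easy to reproduce or audit: the exact settings can be reloaded and inspected. A minimal sketch (requires PyYAML; the configuration keys depend on the platform's schema, so we only list them):

.. code-block:: python

    import yaml

    # Re-open the configuration that produced this result directory.
    with open("config_file.yml") as stream:
        config = yaml.safe_load(stream)
    print(sorted(config))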

``random_state.pickle``
<<<<<<<<<<<<<<<<<<<<<<<

A save of the numpy random state that was used for the experiment; it is mainly useful if no seed is specified in the config file.
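
To re-use that state, the pickle can be reloaded. A minimal sketch, assuming it holds a numpy ``RandomState`` object (if it holds a raw state tuple instead, use ``np.random.set_state``):

.. code-block:: python

    import pickle

    import numpy as np

    with open("random_state.pickle", "rb") as stream:
        saved = pickle.load(stream)

    if isinstance(saved, np.random.RandomState):
        print(saved.rand(3))  # reproducible draws
    else:
        np.random.set_state(saved)  # fall back for a raw state tuple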

``1560_12_25-15_42-*-LOG.log``
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

The log file, a copy of the information printed in the terminal during the benchmark.

Classifier-dependent files
<<<<<<<<<<<<<<<<<<<<<<<<<<

For each classifier, at least one file is generated, called ``1560_12_25-15_42-*-summary.txt``.
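
As one such file exists per classifier (and per view for the monoview ones), a recursive search gathers them all. A minimal sketch, assuming the summaries sit in per-classifier subdirectories of the run directory:

.. code-block:: python

    import glob

    for path in sorted(glob.glob("**/*-summary.txt", recursive=True)):
        print(path)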

.. include:: ./images/summary.txt
    :literal:

.. note::
    This regroups the useful information on the classifier's configuration and its performance. An interpretation section is available for classifiers that provide some interpretation-related information (such as feature importance).

docs/source/tutorials/images/accuracy.png
166 KiB

docs/source/tutorials/images/bar_error.png
11.8 KiB

docs/source/tutorials/images/summary.txt

Classification on plausible database for ViewNumber0 with svm_linear.

f1_score on train : 0.631578947368421
f1_score on test : 0.9268292682926829

Database configuration :
- Database name : plausible
- View name : ViewNumber0
- View shape : (100, 10)
- Learning Rate : 0.2
- Labels used : No, Yes
- Number of cross validation folds : 2

Classifier configuration :
- SVMLinear with C : 0.4644332362255428, random_state : <mtrand.RandomState object at 0x7f30a3a1bb40>
- Executed on 1 core(s)
- Got configuration using randomized search with 2 iterations

For F1 score using None as sample_weights, None as labels, 1 as pos_label, binary as average (higher is better) :
- Score on train : 0.631578947368421
- Score on test : 0.9268292682926829

For Accuracy score using None as sample_weights (higher is better) :
- Score on train : 0.65
- Score on test : 0.925

Classification took 0:00:00

Classifier Interpretation :

@@ -4,7 +4,7 @@
 from ..multiview_classifiers.additions.late_fusion_utils import \
     LateFusionClassifier
 from ..multiview.multiview_utils import get_examples_views_indices

-classifier_class_name = "bayesian_inference_fusion"
+classifier_class_name = "BayesianInferenceClassifier"

 class BayesianInferenceClassifier(LateFusionClassifier):

@@ -3,7 +3,7 @@ import numpy as np
 from multiview_platform.mono_multi_view_classifiers.multiview_classifiers.additions.diversity_utils import GlobalDiversityFusionClassifier

-classifier_class_name = "difficulty_fusion"
+classifier_class_name = "DifficultyFusion"

 class DifficultyFusion(GlobalDiversityFusionClassifier):

@@ -3,7 +3,7 @@ import numpy as np
 from multiview_platform.mono_multi_view_classifiers.multiview_classifiers.additions.diversity_utils import CoupleDiversityFusionClassifier

-classifier_class_name = "disagree_fusion"
+classifier_class_name = "DisagreeFusion"

 class DisagreeFusion(CoupleDiversityFusionClassifier):

@@ -3,7 +3,7 @@ import numpy as np
 from multiview_platform.mono_multi_view_classifiers.multiview_classifiers.additions.diversity_utils import \
     CoupleDiversityFusionClassifier

-classifier_class_name = "double_fault_fusion"
+classifier_class_name = "DoubleFaultFusion"

 class DoubleFaultFusion(CoupleDiversityFusionClassifier):

@@ -3,7 +3,7 @@ import numpy as np
 from multiview_platform.mono_multi_view_classifiers.multiview_classifiers.additions.diversity_utils import GlobalDiversityFusionClassifier

-classifier_class_name = "entropy_fusion"
+classifier_class_name = "EntropyFusion"

 class EntropyFusion(GlobalDiversityFusionClassifier):

@@ -4,7 +4,7 @@
 from ..multiview_classifiers.additions.late_fusion_utils import LateFusionClassifier
 from ..multiview.multiview_utils import get_examples_views_indices

-classifier_class_name = "maority_voting_fusion"
+classifier_class_name = "MajorityVoting"

 class VotingIndecision(Exception):
     pass

@@ -11,7 +11,7 @@ from multiview_platform.mono_multi_view_classifiers.multiview_classifiers.additi
 from multiview_platform.mono_multi_view_classifiers import monoview_classifiers

-classifier_class_name = "weighted_linear_early_fusion"
+classifier_class_name = "WeightedLinearEarlyFusion"

 class WeightedLinearEarlyFusion(BaseMultiviewClassifier, BaseFusionClassifier):

@@ -3,7 +3,7 @@ import numpy as np
 from ..multiview_classifiers.additions.late_fusion_utils import LateFusionClassifier
 from ..multiview.multiview_utils import get_examples_views_indices

-classifier_class_name = "weighted_linear_late_fusion"
+classifier_class_name = "WeightedLinearLateFusion"

 class WeightedLinearLateFusion(LateFusionClassifier):
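
These changes make each module's ``classifier_class_name`` string match the name of the class it designates. That matters if the platform resolves classes from modules by this string; a hypothetical sketch of such a lookup (the ``getattr``-based resolution and the module path are our assumptions, not code from this commit):

.. code-block:: python

    import importlib

    # If classifier_class_name does not exactly match the class name,
    # getattr raises AttributeError.
    module = importlib.import_module(
        "multiview_platform.mono_multi_view_classifiers"
        ".multiview_classifiers.difficulty_fusion")
    classifier_class = getattr(module, module.classifier_class_name)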