Commit 6c9a54c3 authored by Baptiste Bauvin

Completed the doc

parent 4b3341a7
...@@ -11,7 +11,7 @@ In order to do so, a fixed input format is used, and we chose HDF5 as it allows
The bare necessities
--------------------
At the moment, in order for the platform to work, the dataset must satisfy the following minimum requirements:

- Each example must be described in each view, with no missing data (you can use external tools to fill the gaps, or use only the fully-described examples of your dataset)
- ?
...@@ -31,7 +31,7 @@ So three matrices (200x100 ; 200x40 ; 200x55) make up the dataset. The most usua
2. ``image.csv``
3. ``commentary.csv``.
Let's suppose that all this data should be used to classify the examples in two classes: Animal or Object, and that one has a ``labels.csv`` file with one value for each example, a 0 if the example is an Animal and a 1 if it is an Object.
In order to run a benchmark on this dataset, one has to format it using HDF5.
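As an illustration, here is a minimal sketch of how the three views and the labels could be gathered into a single HDF5 file with ``h5py``. The dataset, group and attribute names used below (``View0``, ``Labels``, ``Metadata``, ``name``, ``names``, ...) are assumptions made for this example; refer to the platform's dataset documentation for the exact layout it expects.

.. code-block:: python

    import h5py
    import numpy as np

    # Load the three views and the labels from the CSV files.
    views = [("sound", np.genfromtxt("sound.csv", delimiter=",")),
             ("image", np.genfromtxt("image.csv", delimiter=",")),
             ("commentary", np.genfromtxt("commentary.csv", delimiter=","))]
    labels = np.genfromtxt("labels.csv", delimiter=",").astype(int)

    with h5py.File("animal_or_object.hdf5", "w") as dataset_file:
        for view_index, (view_name, view_data) in enumerate(views):
            # One dataset per view (the "ViewX" naming and the "name"
            # attribute are assumptions about the expected layout).
            view_dataset = dataset_file.create_dataset("View{}".format(view_index),
                                                       data=view_data)
            view_dataset.attrs["name"] = view_name
        # The labels, one integer per example, plus the class names.
        labels_dataset = dataset_file.create_dataset("Labels", data=labels)
        labels_dataset.attrs["names"] = ["Animal".encode(), "Object".encode()]
        # Basic metadata about the dataset (assumed attribute names).
        metadata_group = dataset_file.create_group("Metadata")
        metadata_group.attrs["nbView"] = len(views)
        metadata_group.attrs["nbClass"] = 2
        metadata_group.attrs["datasetLength"] = labels.shape[0]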
...
...@@ -45,6 +45,8 @@ Then, the :python:`__init__()` method of the :python:`AlgoClassifier` class wil
import Algo
from ..monoview.monoview_utils import BaseMonoviewClassifier, CustomUniform, CustomRandint
classifier_class_name = "AlgoClassifier"
class AlgoClassifier(Algo, BaseMonoviewClassifier):

    def __init__(self, random_state=42, trade_off=0.5, norm_type='l1', max_depth=50):
...@@ -64,3 +66,48 @@ In this method, we added the needed attributes. See REF TO DOC OF DISTRIBS for t
If "algo" is implemented in a sklearn fashion, it is now usable in the platform. If "algo" is implemented in a sklearn fashion, it is now usable in the platform.
TODO interpretation
More complex task: Adding a multiview classifier
-------------------------------------------------
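The example below wraps the ``MimboClassifier`` estimator for the platform: the module declares ``classifier_class_name``, the class inherits from ``BaseMultiviewClassifier``, exposes its tunable hyper-parameters through ``param_names`` and ``distribs``, and its ``fit`` and ``predict`` methods convert the platform's HDF5 dataset object to numpy arrays before calling the underlying estimator.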
.. code-block:: python

    from mimbo import MimboClassifier
    from ..multiview.multiview_utils import BaseMultiviewClassifier, \
        get_examples_views_indices
    from ..utils.hyper_parameter_search import CustomRandint

    classifier_class_name = "Mimbo"

    class Mimbo(BaseMultiviewClassifier, MimboClassifier):

        def __init__(self, n_estimators=50,
                     random_state=None,
                     best_view_mode="edge"):
            # Initialize both parents: the platform's base class and the
            # underlying estimator.
            super().__init__(random_state)
            super(BaseMultiviewClassifier, self).__init__(n_estimators=n_estimators,
                                                          random_state=random_state,
                                                          best_view_mode=best_view_mode)
            # Hyper-parameters the platform is allowed to tune, with one
            # distribution (or list of values) per parameter.
            self.param_names = ["n_estimators", "random_state", "best_view_mode"]
            self.distribs = [CustomRandint(5, 200), [random_state], ["edge", "error"]]

        def fit(self, X, y, train_indices=None, view_indices=None):
            # Resolve which examples and views to use, then convert the HDF5
            # dataset to the numpy format expected by the underlying estimator.
            train_indices, view_indices = get_examples_views_indices(X,
                                                                     train_indices,
                                                                     view_indices)
            numpy_X, view_limits = X.to_numpy_array(example_indices=train_indices,
                                                    view_indices=view_indices)
            return super(Mimbo, self).fit(numpy_X, y[train_indices],
                                          view_limits)

        def predict(self, X, example_indices=None, view_indices=None):
            example_indices, view_indices = get_examples_views_indices(X,
                                                                       example_indices,
                                                                       view_indices)
            numpy_X, view_limits = X.to_numpy_array(example_indices=example_indices,
                                                    view_indices=view_indices)
            return super(Mimbo, self).predict(numpy_X)