Commit 284a9b08 authored by Ilya Lysenkov

Merged ml docs with 1.1 docs

parent f8597ceb
......@@ -95,7 +95,7 @@ The constructors.
* **CvBoost::LOGIT** LogitBoost. It can produce good regression fits.
* **CvBoost::GENTLE** Gentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data.
Often the "real" and "gentle" forms of AdaBoost work best.
Gentle AdaBoost and Real AdaBoost are often the preferable choices.
:param weak_count: The number of weak classifiers.
......@@ -109,10 +109,23 @@ Also there is one structure member that you can set directly:
Splitting criteria used to choose optimal splits during a weak tree construction. Possible values are:
* **CvBoost::DEFAULT** Use the default for the particular boosting method.
* **CvBoost::GINI** Default option for real AdaBoost.
* **CvBoost::MISCLASS** Default option for discrete AdaBoost.
* **CvBoost::SQERR** Least-square error; only option available for LogitBoost and gentle AdaBoost.
* **CvBoost::DEFAULT** Use the default for the particular boosting method, see below.
* **CvBoost::GINI** Use the Gini index. This is the default option for Real AdaBoost; it may also be used for Discrete AdaBoost.
* **CvBoost::MISCLASS** Use the misclassification rate. This is the default option for Discrete AdaBoost; it may also be used for Real AdaBoost.
* **CvBoost::SQERR** Use the least-squares criterion. This is the default and the only option for LogitBoost and Gentle AdaBoost.
Default parameters are:
::
CvBoostParams::CvBoostParams()
{
boost_type = CvBoost::REAL;
weak_count = 100;
weight_trim_rate = 0.95;
cv_folds = 0;
max_depth = 1;
}
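As a usage sketch, here is one hedged way to train a boosted classifier with non-default parameters; the values and the ``trainData``/``responses`` matrices are illustrative, not recommendations from this documentation:

::

    CvBoostParams params( CvBoost::GENTLE,  // boost_type
                          200,              // weak_count
                          0.95,             // weight_trim_rate
                          1,                // max_depth: decision stumps
                          false,            // use_surrogates
                          0 );              // priors: none

    CvBoost boost;
    // trainData: CV_32F matrix of samples (one per row);
    // responses: vector of class labels
    boost.train( trainData, CV_ROW_SAMPLE, responses,
                 cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(), params );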
CvBoostTree
-----------
......
......@@ -70,11 +70,11 @@ The structure represents a possible decision tree node split. It has public memb
.. ocv:member:: int inversed
If it is not null then inverse split rule is used that is a left branch and a right branch are switched.
If it is not null then the inverse split rule is used, that is, the left and right branches are exchanged in the rule expressions below.
.. ocv:member:: float quality
Quality of the split.
The split quality, a positive number. It is used to choose the best primary split, then to choose and sort the surrogate splits. After the tree is constructed, it is also used to compute variable importance.
.. ocv:member:: CvDTreeSplit* next
......@@ -82,12 +82,27 @@ The structure represents a possible decision tree node split. It has public memb
.. ocv:member:: int subset[2]
Parameters of the split on a categorical variable.
Bit array indicating the value subset in the case of a split on a categorical variable. The rule is:
.. ocv:member:: struct {float c; int split_point;} ord
::
if var_value in subset
then next_node <- left
else next_node <- right
.. ocv:member:: float ord.c
The threshold value in the case of a split on an ordered variable. The rule is:
::
if var_value < c
then next_node <- left
else next_node <- right
Parameters of the split on ordered variable.
.. ocv:member:: int ord.split_point
Used internally by the training algorithm.
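The two routing rules above can be summarized in code. Below is a simplified sketch; the struct is a hypothetical reduction of ``CvDTreeSplit``, not the actual ML internals:

::

    // Hypothetical, simplified version of the split routing logic;
    // field names mirror CvDTreeSplit but this is not the real implementation.
    struct SimpleSplit
    {
        int inversed;       // if non-zero, swap the branches
        int subset[2];      // bit mask over categories (categorical split)
        float c;            // threshold (ordered split)
    };

    // Returns true if a sample with the given variable value goes left.
    bool goLeft( const SimpleSplit& split, float var_value, bool categorical )
    {
        bool left;
        if( categorical )
        {
            int cat = (int)var_value;   // category index of the sample
            left = (split.subset[cat >> 5] & (1 << (cat & 31))) != 0;
        }
        else
            left = var_value < split.c;
        return split.inversed ? !left : left;   // apply the "inversed" flag
    }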
CvDTreeNode
-----------
......@@ -96,9 +111,13 @@ CvDTreeNode
The structure represents a node in a decision tree. It has public members:
.. ocv:member:: int class_idx
Class index normalized to the 0..class_count-1 range and assigned to the node. It is used internally in classification trees and tree ensembles.
.. ocv:member:: int Tn
Tree index in a sequence of pruned trees. Nodes with :math:`Tn \leq CvDTree::pruned\_tree\_idx` are not used at prediction stage (they are pruned).
Tree index in an ordered sequence of pruned trees. The indices are used during and after the pruning procedure. The root node has the maximum value ``Tn`` of the whole tree, child nodes have ``Tn`` less than or equal to the parent's ``Tn``, and nodes with :math:`Tn \leq CvDTree::pruned\_tree\_idx` are not used at the prediction stage (the corresponding branches are considered as cut off), even if they have not been physically deleted from the tree at the pruning stage.
.. ocv:member:: double value
......@@ -122,20 +141,14 @@ The structure represents a node in a decision tree. It has public members:
.. ocv:member:: int sample_count
Number of samples in the node.
The number of samples that fall into the node at the training stage. It is used to resolve difficult cases: when the variable for the primary split is missing and all the variables for the other surrogate splits are missing too. In this case the sample is directed to the left if ``left->sample_count > right->sample_count`` and to the right otherwise.
.. ocv:member:: int depth
Depth of the node.
Depth of the node. The root node has depth 0; the depth of a child node is its parent's depth plus 1.
The numerous other fields of ``CvDTreeNode`` are used internally at the training stage.
CvDTreeTrainData
----------------
.. ocv:class:: CvDTreeTrainData
Decision tree training data and shared data for tree ensembles. ::
CvDTreeParams
-------------
.. ocv:class:: CvDTreeParams
......@@ -150,7 +163,7 @@ The constructors.
.. ocv:function:: CvDTreeParams::CvDTreeParams( int max_depth, int min_sample_count, float regression_accuracy, bool use_surrogates, int max_categories, int cv_folds, bool use_1se_rule, bool truncate_pruned_tree, const float* priors )
:param max_depth: The maximum number of levels in a tree. The depth of a constructed tree may be smaller due to other termination criterias or pruning of the tree.
:param max_depth: The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than ``max_depth``. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure at the beginning of the section), and/or if the tree is pruned.
:param min_sample_count: If the number of samples in a node is less than this parameter then the node will not be split.
......@@ -158,15 +171,15 @@ The constructors.
:param use_surrogates: If true then surrogate splits will be built. These splits allow to work with missing data and compute variable importance correctly.
:param max_categories: Cluster possible values of a categorical variable into ``K`` :math:`\leq` ``max_categories`` clusters to find a suboptimal split. The clustering is applied only in n>2-class classification problems for categorical variables with ``N > max_categories`` possible values. See the Learning OpenCV book (page 489) for more detailed explanation.
:param max_categories: Cluster possible values of a categorical variable into ``K`` :math:`\leq` ``max_categories`` clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than ``max_categories`` values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including ML) try to find a sub-optimal split in this case by clustering all the samples into ``max_categories`` clusters, that is, some categories are merged together. The clustering is applied only in ``n``>2-class classification problems for categorical variables with ``N > max_categories`` possible values. In the case of regression and 2-class classification, the optimal split can be found efficiently without employing clustering, so the parameter is not used in these cases.
:param cv_folds: If ``cv_folds > 1`` then prune a tree with ``K``-fold cross-validation where ``K`` is equal to ``cv_folds``.
:param use_1se_rule: If true then a pruning will be harsher. This will make a tree more compact but a bit less accurate.
:param use_1se_rule: If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate.
:param truncate_pruned_tree: If true then pruned branches are removed completely from the tree. Otherwise they are retained and it is possible to get the unpruned tree or prune the tree differently by changing ``CvDTree::pruned_tree_idx`` parameter.
:param truncate_pruned_tree: If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or less aggressively pruned) tree by decreasing the ``CvDTree::pruned_tree_idx`` parameter.
:param priors: Weights of prediction categories which determine relative weights that you give to misclassification. That is, if the weight of the first category is 1 and the weight of the second category is 10, then each mistake in predicting the second category is equivalent to making 10 mistakes in predicting the first category.
:param priors: The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if you want to detect some rare anomaly occurrence, the training base will likely contain many more normal cases than anomalies, so a very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, where the anomaly probability is artificially increased (up to 0.5 or even greater), so the weight of the misclassified anomalies becomes much bigger, and the tree is adjusted properly. You can also think about this parameter as weights of prediction categories which determine relative weights that you give to misclassification. That is, if the weight of the first category is 1 and the weight of the second category is 10, then each mistake in predicting the second category is equivalent to making 10 mistakes in predicting the first category. A short usage sketch involving priors is given below.
The default constructor initializes all the parameters with the default values tuned for the standalone classification tree:
......
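As a hedged usage sketch for the parameters above (all values, including the priors, are purely illustrative):

::

    // Penalize misclassifying the rare class (label 1) ten times more.
    const float priors[] = { 1.f, 10.f };
    CvDTreeParams params( 8,      // max_depth
                          10,     // min_sample_count
                          0.f,    // regression_accuracy
                          true,   // use_surrogates
                          10,     // max_categories
                          10,     // cv_folds
                          true,   // use_1se_rule
                          false,  // truncate_pruned_tree
                          priors );

    CvDTree tree;
    tree.train( trainData, CV_ROW_SAMPLE, responses,
                cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(), params );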
......@@ -102,13 +102,15 @@ The constructors
.. ocv:function:: CvEMParams::CvEMParams( int nclusters, int cov_mat_type=CvEM::COV_MAT_DIAGONAL, int start_step=CvEM::START_AUTO_STEP, CvTermCriteria term_crit=cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, FLT_EPSILON), const CvMat* probs=0, const CvMat* weights=0, const CvMat* means=0, const CvMat** covs=0 )
:param nclusters: The number of mixtures in the gaussian mixture model.
:param nclusters: The number of mixture components in the Gaussian mixture model. Some EM implementations can determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet.
:param cov_mat_type: Constraint on covariance matrices which defines type of matrices. Possible values are:
* **CvEM::COV_MAT_SPHERICAL** A scaled identity matrix :math:`\mu_k * I`.
* **CvEM::COV_MAT_DIAGONAL** A diagonal matrix with positive diagonal elements.
* **CvEM::COV_MAT_GENERIC** A symmetric positively defined matrix.
* **CvEM::COV_MAT_SPHERICAL** A scaled identity matrix :math:`\mu_k * I`. There is only one parameter, :math:`\mu_k`, to be estimated for each matrix. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (for example, when the data is preprocessed with PCA). The results of such a preliminary estimation may be passed again to the optimization procedure, this time with ``cov_mat_type=CvEM::COV_MAT_DIAGONAL``.
* **CvEM::COV_MAT_DIAGONAL** A diagonal matrix with positive diagonal elements. The number of free parameters is ``d`` for each matrix. This is the most commonly used option, yielding good estimation results.
* **CvEM::COV_MAT_GENERIC** A symmetric positive-definite matrix. The number of free parameters in each matrix is about :math:`d^2/2`. It is not recommended to use this option unless there is a fairly accurate initial estimation of the parameters and/or a huge number of training samples.
:param start_step: The start step of the EM algorithm:
......@@ -118,13 +120,13 @@ The constructors
:param term_crit: The termination criteria of the EM algorithm. The EM algorithm can be terminated by the number of iterations ``term_crit.max_iter`` (number of M-steps) or when relative change of likelihood logarithm is less than ``term_crit.epsilon``.
:param probs: Initial probabilities :math:`p_{i,k}` of sample :math:`i` to belong to mixture :math:`k`. It is a floating-point matrix of :math:`nsamples \times nclusters` size.
:param probs: Initial probabilities :math:`p_{i,k}` of sample :math:`i` to belong to mixture :math:`k`. It is a floating-point matrix of :math:`nsamples \times nclusters` size. It is used, and must not be NULL, only when ``start_step=CvEM::START_M_STEP``.
:param weights: Initial weights of mixtures :math:`\pi_k`. It is a floating-point vector with :math:`nclusters` elements.
:param weights: Initial weights of mixtures :math:`\pi_k`. It is a floating-point vector with :math:`nclusters` elements. It is used (if not NULL) only when ``start_step=CvEM::START_E_STEP``.
:param means: Initial means of mixtures :math:`a_k`. It is a floating-point matrix of :math:`nclusters \times dims` size.
:param means: Initial means of mixtures :math:`a_k`. It is a floating-point matrix of :math:`nclusters \times dims` size. It is used, and must not be NULL, only when ``start_step=CvEM::START_E_STEP``.
:param covs: Initial covariance matrices of mixtures :math:`S_k`. Each of covariance matrices is a valid square floating-point matrix of :math:`dims \times dims` size.
:param covs: Initial covariance matrices of mixtures :math:`S_k`. Each of covariance matrices is a valid square floating-point matrix of :math:`dims \times dims` size. It is used (if not NULL) only when ``start_step=CvEM::START_E_STEP``.
The default constructor represents a rough rule of thumb:
......
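A minimal clustering sketch using the parameters above; the data shape and the number of clusters are illustrative:

::

    cv::Mat samples( 1000, 2, CV_32FC1 );
    // ... fill samples with the points to be clustered ...

    CvEMParams params;
    params.nclusters    = 4;
    params.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
    params.start_step   = CvEM::START_AUTO_STEP;

    CvEM em;
    cv::Mat labels;     // receives the most probable mixture index per sample
    em.train( samples, cv::Mat(), params, &labels );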
......@@ -21,7 +21,7 @@ Default and training constructors.
.. ocv:cfunction:: CvKNearest::CvKNearest( const CvMat* trainData, const CvMat* responses, const CvMat* sampleIdx=0, bool isRegression=false, int max_k=32 )
See :ocv:func:`CvKNearest::train` for parameters descriptions.
See :ocv:func:`CvKNearest::train` for additional parameters descriptions.
CvKNearest::train
-----------------
......
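A minimal sketch of the training constructor followed by prediction, assuming the ``cv::Mat`` overload of ``find_nearest``; shapes are illustrative:

::

    cv::Mat trainData( 100, 2, CV_32FC1 ), responses( 100, 1, CV_32FC1 );
    // ... fill trainData and responses ...

    CvKNearest knn( trainData, responses );          // training constructor
    cv::Mat sample = trainData.row( 0 );
    float response = knn.find_nearest( sample, 5 );  // vote of the 5 nearest neighbors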
......@@ -113,11 +113,11 @@ The back-propagation algorithm parameters:
.. ocv:member:: double bp_dw_scale
Strength of the weight gradient term.
Strength of the weight gradient term. The recommended value is about 0.1.
.. ocv:member:: double bp_moment_scale
Strength of the momentum term.
Strength of the momentum term (the difference between the weights on the two previous iterations). This parameter provides some inertia to smooth random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough in most cases.
The RPROP algorithm parameters (see :ref:`[RPROP93] <RPROP93>` for details):
......@@ -127,19 +127,19 @@ The RPROP algorithm parameters (see :ref:`[RPROP93] <RPROP93>` for details):
.. ocv:member:: double rp_dw_plus
Increase factor :math:`\eta^+`.
Increase factor :math:`\eta^+`. It must be >1.
.. ocv:member:: double rp_dw_minus
Decrease factor :math:`\eta^-`.
Decrease factor :math:`\eta^-`. It must be <1.
.. ocv:member:: double rp_dw_min
Update-values lower limit :math:`\Delta_{min}`.
Update-values lower limit :math:`\Delta_{min}`. It must be positive.
.. ocv:member:: double rp_dw_max
Update-values upper limit :math:`\Delta_{max}`.
Update-values upper limit :math:`\Delta_{max}`. It must be >1.
CvANN_MLP_TrainParams::CvANN_MLP_TrainParams
......@@ -150,7 +150,7 @@ The constructors.
.. ocv:function:: CvANN_MLP_TrainParams::CvANN_MLP_TrainParams( CvTermCriteria term_crit, int train_method, double param1, double param2=0 )
:param term_crit: Termination criteria of the training algorithm. You can specify the maximum number of iterations (``max_iter``) and/or tolerance on the error change (``epsilon``).
:param term_crit: Termination criteria of the training algorithm. You can specify the maximum number of iterations (``max_iter``) and/or the minimum change of the error between iterations that makes the algorithm continue (``epsilon``).
:param train_method: Training method of the MLP. Possible values are:
......
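A hedged configuration sketch using the constructor above; the constants are illustrative, not recommendations:

::

    CvANN_MLP_TrainParams params(
        cvTermCriteria( CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 0.01 ),
        CvANN_MLP_TrainParams::BACKPROP,
        0.1,     // param1: bp_dw_scale, strength of the weight gradient term
        0.1 );   // param2: bp_moment_scale, strength of the momentum term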
......@@ -71,7 +71,7 @@ The constructors.
:param calc_var_importance: If true then variable importance will be calculated and then it can be retrieved by :ocv:func:`CvRTrees::get_var_importance`.
:param nactive_vars: The size of the randomly selected subset of features to be tested at any given node. If you set it to 0 then the size will be set to the square root of the total number of features.
:param nactive_vars: The size of the randomly selected subset of features tested at each tree node and used to find the best split(s). If you set it to 0 then the size will be set to the square root of the total number of features.
:param max_num_of_trees_in_the_forest: The maximum number of trees in the forest (surprise, surprise).
......
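A hedged parameter sketch for random trees; all values are examples only:

::

    CvRTParams params( 10,     // max_depth
                       5,      // min_sample_count
                       0.f,    // regression_accuracy
                       false,  // use_surrogates
                       15,     // max_categories
                       0,      // priors
                       true,   // calc_var_importance
                       0,      // nactive_vars: 0 means sqrt(total features)
                       100,    // max_num_of_trees_in_the_forest
                       0.01f,  // forest_accuracy (OOB error bound)
                       CV_TERMCRIT_ITER | CV_TERMCRIT_EPS );

    CvRTrees forest;
    forest.train( trainData, CV_ROW_SAMPLE, responses,
                  cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(), params );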
......@@ -46,7 +46,7 @@ The default constructor.
.. ocv:function:: CvStatModel::CvStatModel()
Each statistical model class in ML has a default constructor without parameters. This constructor is useful for a two-stage model construction, when the default constructor is followed by ``train()`` or ``load()`` .
Each statistical model class in ML has a default constructor without parameters. This constructor is useful for a two-stage model construction, when the default constructor is followed by :ocv:func:`CvStatModel::train` or :ocv:func:`CvStatModel::load`.
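A minimal two-stage construction sketch (the file name is hypothetical):

::

    CvSVM svm;                        // stage 1: default constructor
    svm.load( "trained_svm.xml" );    // stage 2: restore a previously saved model
    // ... or instead:
    // svm.train( trainData, responses, cv::Mat(), cv::Mat(), CvSVMParams() );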
CvStatModel::CvStatModel(...)
-----------------------------
......@@ -54,7 +54,7 @@ The training constructor.
.. ocv:function:: CvStatModel::CvStatModel( const Mat& train_data ... )
Most ML classes provide a single-step constructor and train constructors. This constructor is equivalent to the default constructor, followed by the ``train()`` method with the parameters that are passed to the constructor.
Most ML classes provide a single-step constructor and train constructors. This constructor is equivalent to the default constructor, followed by the :ocv:func:`CvStatModel::train` method with the parameters that are passed to the constructor.
CvStatModel::~CvStatModel
-------------------------
......@@ -73,7 +73,7 @@ The destructor of the base class is declared as virtual. So, it is safe to write
delete model;
Normally, the destructor of each derived class does nothing. But in this instance, it calls the overridden method ``clear()`` that deallocates all the memory.
Normally, the destructor of each derived class does nothing. But in this instance, it calls the overridden method :ocv:func:`CvStatModel::clear` that deallocates all the memory.
CvStatModel::clear
------------------
......@@ -81,7 +81,7 @@ Deallocates memory and resets the model state.
.. ocv:function:: void CvStatModel::clear()
The method ``clear`` does the same job as the destructor: it deallocates all the memory occupied by the class members. But the object itself is not destructed and can be reused further. This method is called from the destructor, from the ``train`` methods of the derived classes, from the methods ``load()``, ``read()``, or even explicitly by the user.
The method ``clear`` does the same job as the destructor: it deallocates all the memory occupied by the class members. But the object itself is not destructed and can be reused further. This method is called from the destructor, from the :ocv:func:`CvStatModel::train` methods of the derived classes, from the methods :ocv:func:`CvStatModel::load` and :ocv:func:`CvStatModel::read`, or even explicitly by the user.
CvStatModel::save
-----------------
......@@ -101,7 +101,7 @@ Loads the model from a file.
.. ocv:pyfunction:: cv2.CvStatModel.load(filename[, name]) -> None
The method ``load`` loads the complete model state with the specified name (or default model-dependent name) from the specified XML or YAML file. The previous model state is cleared by ``clear()`` .
The method ``load`` loads the complete model state with the specified name (or default model-dependent name) from the specified XML or YAML file. The previous model state is cleared by :ocv:func:`CvStatModel::clear`.
CvStatModel::write
......@@ -110,7 +110,7 @@ Writes the model to the file storage.
.. ocv:function:: void CvStatModel::write( CvFileStorage* storage, const char* name )
The method ``write`` stores the complete model state in the file storage with the specified name or default name (which depends on the particular class). The method is called by ``save()`` .
The method ``write`` stores the complete model state in the file storage with the specified name or default name (which depends on the particular class). The method is called by :ocv:func:`CvStatModel::save`.
CvStatModel::read
......@@ -122,7 +122,7 @@ Reads the model from the file storage.
The method ``read`` restores the complete model state from the specified node of the file storage. Use the function
:ocv:func:`GetFileNodeByName` to locate the node.
The previous model state is cleared by ``clear()`` .
The previous model state is cleared by :ocv:func:`CvStatModel::clear`.
CvStatModel::train
------------------
......@@ -144,13 +144,13 @@ For classification problems, the responses are discrete class labels. For regres
* ``CV_VAR_ORDERED(=CV_VAR_NUMERICAL)`` The output values are ordered. This means that two different values can be compared as numbers, and this is a regression problem.
Types of input variables can be also specified using ``var_type`` . Most algorithms can handle only ordered input variables.
Types of input variables can be also specified using ``var_type``. Most algorithms can handle only ordered input variables.
Many ML models may be trained on a selected feature subset, and/or on a selected sample subset of the training set. To make it easier for you, the method ``train`` usually includes the ``var_idx`` and ``sample_idx`` parameters. The former parameter identifies variables (features) of interest, and the latter one identifies samples of interest. Both vectors are either integer ( ``CV_32SC1`` ) vectors (lists of 0-based indices) or 8-bit ( ``CV_8UC1`` ) masks of active variables/samples. You may pass ``NULL`` pointers instead of either of the arguments, meaning that all of the variables/samples are used for training.
Many ML models may be trained on a selected feature subset, and/or on a selected sample subset of the training set. To make it easier for you, the method ``train`` usually includes the ``var_idx`` and ``sample_idx`` parameters. The former parameter identifies variables (features) of interest, and the latter one identifies samples of interest. Both vectors are either integer (``CV_32SC1``) vectors (lists of 0-based indices) or 8-bit (``CV_8UC1``) masks of active variables/samples. You may pass ``NULL`` pointers instead of either of the arguments, meaning that all of the variables/samples are used for training.
Additionally, some algorithms can handle missing measurements, that is, when certain features of certain training samples have unknown values (for example, they forgot to measure a temperature of patient A on Monday). The parameter ``missing_mask`` , an 8-bit matrix of the same size as ``train_data`` , is used to mark the missed values (non-zero elements of the mask).
Additionally, some algorithms can handle missing measurements, that is, when certain features of certain training samples have unknown values (for example, they forgot to measure a temperature of patient A on Monday). The parameter ``missing_mask``, an 8-bit matrix of the same size as ``train_data``, is used to mark the missed values (non-zero elements of the mask).
Usually, the previous model state is cleared by ``clear()`` before running the training procedure. However, some algorithms may optionally update the model state with the new training data, instead of resetting it.
Usually, the previous model state is cleared by :ocv:func:`CvStatModel::clear` before running the training procedure. However, some algorithms may optionally update the model state with the new training data, instead of resetting it.
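A hedged sketch of training on a feature/sample subset; the indices and matrix shapes are illustrative:

::

    cv::Mat trainData( 100, 10, CV_32F ), responses( 100, 1, CV_32F );
    // ... fill trainData and responses ...

    int activeVars[] = { 0, 2, 5 };               // use only features 0, 2 and 5
    cv::Mat varIdx( 1, 3, CV_32SC1, activeVars );

    cv::Mat sampleMask( 1, 100, CV_8UC1, cv::Scalar(1) );
    sampleMask.colRange( 80, 100 ).setTo( 0 );    // hold out the last 20 samples

    CvDTree tree;
    tree.train( trainData, CV_ROW_SAMPLE, responses, varIdx, sampleMask );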
CvStatModel::predict
--------------------
......@@ -158,7 +158,7 @@ Predicts the response for a sample.
.. ocv:function:: float CvStatModel::predict( const Mat& sample[, <prediction_params>] ) const
The method is used to predict the response for a new sample. In case of a classification, the method returns the class label. In case of a regression, the method returns the output function value. The input sample must have as many components as the ``train_data`` passed to ``train`` contains. If the ``var_idx`` parameter is passed to ``train`` , it is remembered and then is used to extract only the necessary components from the input sample in the method ``predict`` .
The method is used to predict the response for a new sample. In case of a classification, the method returns the class label. In case of a regression, the method returns the output function value. The input sample must have as many components as the ``train_data`` passed to ``train`` contains. If the ``var_idx`` parameter is passed to ``train``, it is remembered and then is used to extract only the necessary components from the input sample in the method ``predict``.
The suffix ``const`` means that prediction does not affect the internal model state, so the method can be safely called from within different threads.
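A minimal prediction sketch (here with an SVM; any ``CvStatModel``-derived class follows the same pattern, and the file name is hypothetical):

::

    CvSVM model;
    model.load( "model.xml" );            // a previously trained and saved model
    cv::Mat sample( 1, 10, CV_32F );      // as many components as train_data had
    // ... fill the sample with feature values ...
    float response = model.predict( sample );   // class label or function value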
......@@ -111,19 +111,26 @@ The constructors.
:param svm_type: Type of a SVM formulation. Possible values are:
* **CvSVM::C_SVC** C-Support Vector Classification.
* **CvSVM::NU_SVC** :math:`\nu`-Support Vector Classification.
* **CvSVM::ONE_CLASS** Distribution Estimation (One-class SVM)
* **CvSVM::EPS_SVR** :math:`\epsilon`-Support Vector Regression
* **CvSVM::NU_SVR** :math:`\nu`-Support Vector Regression
* **CvSVM::C_SVC** C-Support Vector Classification. ``n``-class classification (``n`` :math:`\geq` 2), allows imperfect separation of classes with penalty multiplier ``C`` for outliers.
* **CvSVM::NU_SVC** :math:`\nu`-Support Vector Classification. ``n``-class classification with possible imperfect separation. Parameter :math:`\nu` (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of ``C``.
* **CvSVM::ONE_CLASS** Distribution Estimation (One-class SVM). All the training data are from the same class, SVM builds a boundary that separates the class from the rest of the feature space.
* **CvSVM::EPS_SVR** :math:`\epsilon`-Support Vector Regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than ``p``. For outliers the penalty multiplier ``C`` is used.
* **CvSVM::NU_SVR** :math:`\nu`-Support Vector Regression. :math:`\nu` is used instead of ``p``.
See :ref:`[LibSVM] <LibSVM>` for details.
:param kernel_type: Type of a SVM kernel. Possible values are:
* **CvSVM::LINEAR** Linear kernel: :math:`K(x_i, x_j) = x_i^T x_j`.
* **CvSVM::LINEAR** Linear kernel. No mapping is done; linear discrimination (or regression) is done in the original feature space. It is the fastest option. :math:`K(x_i, x_j) = x_i^T x_j`.
* **CvSVM::POLY** Polynomial kernel: :math:`K(x_i, x_j) = (\gamma x_i^T x_j + coef0)^{degree}, \gamma > 0`.
* **CvSVM::RBF** Radial basis function (RBF): :math:`K(x_i, x_j) = e^{-\gamma ||x_i - x_j||^2}, \gamma > 0`.
* **CvSVM::RBF** Radial basis function (RBF), a good choice in most cases. :math:`K(x_i, x_j) = e^{-\gamma ||x_i - x_j||^2}, \gamma > 0`.
* **CvSVM::SIGMOID** Sigmoid kernel: :math:`K(x_i, x_j) = \tanh(\gamma x_i^T x_j + coef0)`.
:param degree: Parameter ``degree`` of a kernel function (POLY).
......@@ -132,15 +139,15 @@ The constructors.
:param coef0: Parameter ``coef0`` of a kernel function (POLY / SIGMOID).
:param Cvalue: Parameter ``C`` of a SVM formulation (C_SVC / EPS_SVR / NU_SVR).
:param Cvalue: Parameter ``C`` of a SVM optimization problem (C_SVC / EPS_SVR / NU_SVR).
:param nu: Parameter :math:`\nu` of a SVM formulation (NU_SVC / ONE_CLASS / NU_SVR).
:param nu: Parameter :math:`\nu` of a SVM optimization problem (NU_SVC / ONE_CLASS / NU_SVR).
:param p: Parameter :math:`\epsilon` of a SVM formulation (EPS_SVR)
:param p: Parameter :math:`\epsilon` of a SVM optimization problem (EPS_SVR).
:param class_weights: Sets the parameter ``C`` of class ``#i`` to :math:`class\_weights_i * C` (C_SVC).
:param class_weights: Optional weights in the C_SVC problem, assigned to particular classes. They are multiplied by ``C`` so the parameter ``C`` of class ``#i`` becomes :math:`class\_weights_i * C`. Thus these weights affect the misclassification penalty for different classes. The larger the weight, the larger the penalty on misclassification of data from the corresponding class.
:param term_crit: Termination criteria of SVM training optimization loop: you can specify tolerance and/or the maximum number of iterations.
:param term_crit: Termination criteria of the iterative SVM training procedure, which solves a partial case of the constrained quadratic optimization problem. You can specify tolerance and/or the maximum number of iterations.
The default constructor initializes the structure with the following values:
......
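A hedged training sketch using the parameters described above; the values are illustrative, not tuned:

::

    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::RBF;
    params.gamma       = 0.5;
    params.C           = 10;
    params.term_crit   = cvTermCriteria( CV_TERMCRIT_ITER + CV_TERMCRIT_EPS,
                                         1000, FLT_EPSILON );

    CvSVM svm;
    svm.train( trainData, responses, cv::Mat(), cv::Mat(), params );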