Commit 8b6fe3b2 authored by Vadim Pisarevsky

integrated grammar fixes from tech writer (part 2)

parent 0bd3d6d2
@@ -3,13 +3,15 @@
Boosting
========
.. highlight:: cpp
A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship
:math:`F: y = F(x)` between the input
:math:`x` and the output
:math:`y` . Predicting the qualitative output is called *classification*, while predicting the quantitative output is called *regression*.
Boosting is a powerful learning concept that provides a solution to the supervised classification learning task. It combines the performance of many "weak" classifiers to produce a powerful committee
:ref:`[HTF01] <HTF01>` . A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. However, many of them, smartly combined, result in a strong classifier that often outperforms most "monolithic" strong classifiers such as SVMs and Neural Networks.
Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees with only a single split node per tree (called ``stumps`` ) are sufficient.
@@ -19,10 +21,10 @@ The boosted model is based on
:math:`x_i \in{R^K}` and
:math:`y_i \in{-1, +1}` .
:math:`x_i` is a
:math:`K` -component vector. Each component encodes a feature relevant to the learning task at hand. The desired two-class output is encoded as -1 and +1.
Different variants of boosting are known as Discrete AdaBoost, Real AdaBoost, LogitBoost, and Gentle AdaBoost
:ref:`[FHT98] <FHT98>` . All of them are very similar in their overall structure. Therefore, this chapter focuses only on the standard two-class Discrete AdaBoost algorithm, shown in the box below. Initially the same weight is assigned to each sample (step 2). Then, a weak classifier
:math:`f_m(x)` is trained on the weighted training data (step 3a). Its weighted training error and scaling factor
:math:`c_m` are computed (step 3b). The weights are increased for training samples that have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another
:math:`M-1` times. The final classifier
@@ -63,34 +65,25 @@ Different variants of boosting are known as Discrete Adaboost, Real AdaBoost, Lo
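The algorithm box referenced by the caption below does not survive in this hunk. The following is a sketch of the standard two-class Discrete AdaBoost procedure from :ref:`[FHT98] <FHT98>`, matching the steps (2, 3a-3c, 4) cited in the text above:

.. math::

    \begin{array}{l}
    \mbox{1. Set } N \mbox{ examples } (x_1, y_1), \ldots, (x_N, y_N) \mbox{ with } x_i \in R^K, \; y_i \in \{-1, +1\}. \\
    \mbox{2. Assign weights } w_i = 1/N, \; i = 1, \ldots, N. \\
    \mbox{3. Repeat for } m = 1, 2, \ldots, M: \\
    \quad \mbox{a. Fit the classifier } f_m(x) \in \{-1, 1\} \mbox{ using weights } w_i \mbox{ on the training data.} \\
    \quad \mbox{b. Compute } err_m = E_w[1_{(y \ne f_m(x))}], \; c_m = \log\left((1 - err_m)/err_m\right). \\
    \quad \mbox{c. Set } w_i \leftarrow w_i \exp\left[c_m 1_{(y_i \ne f_m(x_i))}\right], \; i = 1, \ldots, N, \mbox{ and renormalize so that } \sum_i w_i = 1. \\
    \mbox{4. Classify new samples } x \mbox{ using the sign of } \sum_{m=1}^{M} c_m f_m(x). \\
    \end{array}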
Two-class Discrete AdaBoost Algorithm: Training (steps 1 to 3) and Evaluation (step 4)
.. note:: Similar to the classical boosting methods, the current implementation supports two-class classifiers only. For :math:`M > 2` classes, there is the **AdaBoost.MH** algorithm (described in :ref:`[FHT98] <FHT98>`) that reduces the problem to the two-class problem, yet with a much larger training set.
To reduce computation time for boosted models without substantially losing accuracy, the influence trimming technique can be employed. As the training algorithm proceeds and the number of trees in the ensemble is increased, a larger number of the training samples are classified correctly and with increasing confidence; those samples therefore receive smaller weights on the subsequent iterations. Examples with a very low relative weight have a small impact on the weak classifier training. Thus, such examples may be excluded during the weak classifier training without having much effect on the induced classifier. This process is controlled with the ``weight_trim_rate`` parameter. Only examples with the summary fraction ``weight_trim_rate`` of the total weight mass are used in the weak classifier training. Note that the weights for
**all**
training examples are recomputed at each training iteration. Examples deleted at a particular iteration may be used again for learning some of the weak classifiers further
:ref:`[FHT98] <FHT98>` .
.. _HTF01:

[HTF01] Hastie, T., Tibshirani, R., Friedman, J. H. *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. Springer Series in Statistics. 2001.

.. _FHT98:

[FHT98] Friedman, J. H., Hastie, T. and Tibshirani, R. *Additive Logistic Regression: a Statistical View of Boosting*. Technical Report, Dept. of Statistics, Stanford University, 1998.
CvBoostParams
-------------
.. ocv:class:: CvBoostParams
Boosting training parameters.
@@ -98,10 +91,6 @@ The structure is derived from :ref:`CvDTreeParams` but not all of the decision t
All parameters are public. You can initialize them by a constructor and then override some of them directly if you want.
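For instance, a brief sketch (assuming the standard C++ constructor signature ``CvBoostParams( boost_type, weak_count, weight_trim_rate, max_depth, use_surrogates, priors )``): ::

    // Gentle AdaBoost, 200 stumps (max_depth = 1), no priors.
    CvBoostParams params( CvBoost::GENTLE, 200, 0.95, 1, false, 0 );
    params.weight_trim_rate = 0.0; // override directly: 0 turns influence trimming off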
CvBoostParams::CvBoostParams
----------------------------
.. ocv:function:: CvBoostParams::CvBoostParams()
@@ -137,9 +126,9 @@ Also there is one parameter that you can set directly.
CvBoostTree
-----------
.. ocv:class:: CvBoostTree

Weak tree classifier. ::
class CvBoostTree: public CvDTree
{
@@ -161,28 +150,21 @@ Weak tree classifier ::
The weak classifier, a component of the boosted tree classifier
:ocv:class:`CvBoost` , is a derivative of
:ocv:class:`CvDTree` . Normally, there is no need to use the weak classifiers directly. However, they can be accessed as elements of the sequence ``CvBoost::weak`` , retrieved by ``CvBoost::get_weak_predictors`` .

.. note::
   In case of LogitBoost and Gentle AdaBoost, each weak predictor is a regression tree, rather than a classification tree. Even in case of Discrete AdaBoost and Real AdaBoost, the ``CvBoostTree::predict`` return value ( ``CvDTreeNode::value`` ) is not an output class label. A negative value "votes" for class #0, a positive value for class #1. The votes are weighted. The weight of each individual tree may be increased or decreased using the method ``CvBoostTree::scale`` .
CvBoost
-------
.. ocv:class:: CvBoost
Boosted tree classifier derived from :ocv:class:`CvStatModel`.
CvBoost::train
--------------
@@ -192,10 +174,6 @@ CvBoost::train
The train method follows the common template. The last parameter ``update`` specifies whether the classifier needs to be updated (new weak tree classifiers are added to the existing ensemble) or the classifier needs to be rebuilt from scratch. The responses must be categorical, which means that boosted trees cannot be built for regression, and there should be two classes.
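A minimal, self-contained training sketch (the synthetic data and parameter values are illustrative assumptions, not recommendations): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    int main()
    {
        // 100 samples with 5 features each; two-class labels encoded as -1/+1.
        cv::Mat trainData( 100, 5, CV_32FC1 );
        cv::Mat responses( 100, 1, CV_32SC1 );
        cv::randu( trainData, cv::Scalar(0), cv::Scalar(1) );
        for( int i = 0; i < responses.rows; i++ )
            responses.at<int>(i) = (i % 2) ? +1 : -1;

        // Real AdaBoost with 100 stumps (max_depth = 1).
        CvBoostParams params( CvBoost::REAL, 100, 0.95, 1, false, 0 );

        CvBoost boost;
        boost.train( trainData, CV_ROW_SAMPLE, responses,
                     cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(),
                     params, false ); // update=false: rebuild from scratch
        return 0;
    }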
CvBoost::predict
----------------
.. ocv:function:: float CvBoost::predict( const Mat& sample, const Mat& missing=Mat(), const Range& slice=Range::all(), bool rawMode=false, bool returnSum=false ) const
@@ -204,10 +182,6 @@ CvBoost::predict
The method ``CvBoost::predict`` runs the sample through the trees in the ensemble and returns the output class label based on the weighted voting.
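Continuing the training sketch from ``CvBoost::train`` above: ::

    cv::Mat sample = trainData.row(0);
    float label = boost.predict( sample );  // weighted-vote class label: -1 or +1
    // With returnSum=true, the raw weighted sum of the votes is returned instead:
    float votes = boost.predict( sample, cv::Mat(), cv::Range::all(), false, true );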
CvBoost::prune
--------------
.. ocv:function:: void CvBoost::prune( CvSlice slice )
@@ -216,18 +190,8 @@ CvBoost::prune
The method removes the specified weak classifiers from the sequence.
.. note:: Do not confuse this method with the pruning of individual decision trees, which is currently not supported.
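A brief usage sketch (``boost`` is a trained classifier, as above): ::

    // Remove every weak classifier from the 10th one onwards.
    boost.prune( cvSlice(10, CV_WHOLE_SEQ_END_INDEX) );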
CvBoost::calc_error
-------------------
@@ -238,22 +202,13 @@ CvBoost::calc_error
The method is identical to :ocv:func:`CvDTree::calc_error` but uses the boosted tree classifier as predictor.
CvBoost::get_weak_predictors
----------------------------
.. ocv:function:: CvSeq* CvBoost::get_weak_predictors()
Returns the sequence of weak tree classifiers.
The method returns the sequence of weak classifiers. Each element of the sequence is a pointer to the ``CvBoostTree`` class or to some of its derivatives.
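A hedged sketch of walking the returned sequence with the C sequence reader: ::

    CvSeq* weak = boost.get_weak_predictors();
    CvSeqReader reader;
    cvStartReadSeq( weak, &reader );
    for( int i = 0; i < weak->total; i++ )
    {
        CvBoostTree* tree;
        CV_READ_SEQ_ELEM( tree, reader ); // reads a pointer and advances the reader
        tree->scale( 0.5 );               // for example, halve this tree's vote weight
    }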
CvBoost::get_params
-------------------
@@ -262,14 +217,8 @@ CvBoost::get_params
Returns current parameters of the boosted tree classifier.
CvBoost::get_data
-----------------
.. ocv:function:: const CvDTreeTrainData* CvBoost::get_data() const
Returns the training data used by the boosted tree classifier.
Expectation Maximization
========================
.. highlight:: cpp

The Expectation Maximization (EM) algorithm estimates the parameters of the multivariate probability density function in the form of a Gaussian mixture distribution with a specified number of mixtures.
Consider the set of the N feature vectors
{ :math:`x_1, x_2,...,x_{N}` } from a d-dimensional Euclidean space drawn from a Gaussian mixture:
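The mixture density itself falls outside this hunk; its standard form, consistent with the notation used later in this section ( :math:`a_k` for the means, :math:`S_k` for the covariance matrices, :math:`\pi_k` for the mixture weights), is:

.. math::

    p(x; a_k, S_k, \pi_k) = \sum_{k=1}^{M} \pi_k p_k(x), \quad \pi_k \ge 0, \quad \sum_{k=1}^{M} \pi_k = 1,

.. math::

    p_k(x) = \varphi(x; a_k, S_k) = \frac{1}{(2\pi)^{d/2} |S_k|^{1/2}} \exp \left\{ -\frac{1}{2} (x - a_k)^T S_k^{-1} (x - a_k) \right\}.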
@@ -59,7 +61,7 @@ At the second step (Maximization step or M-step), the mixture parameter estimate
Alternatively, the algorithm may start with the M-step when the initial values for
:math:`p_{i,k}` can be provided. Another alternative when
:math:`p_{i,k}` are unknown is to use a simpler clustering algorithm to pre-cluster the input samples and thus obtain initial
:math:`p_{i,k}` . Often (including machine learning) the
:ref:`kmeans` algorithm is used for that purpose.
One of the main problems of the EM algorithm is a large number
@@ -83,22 +85,16 @@ already a good enough approximation).
* [Bilmes98] J. A. Bilmes. *A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models*. Technical Report TR-97-021, International Computer Science Institute and Computer Science Division, University of California at Berkeley, April 1998.
CvEMParams
----------
.. ocv:class:: CvEMParams
Parameters of the EM algorithm.
All parameters are public. You can initialize them by a constructor and then override some of them directly if you want.
CvEMParams::CvEMParams
----------------------
@@ -144,22 +140,12 @@ The default constructor represents a rough rule-of-the-thumb:
With another constructor it is possible to override a variety of parameters, from just the number of mixtures (the only essential problem-dependent parameter) to the initial values for the mixture parameters.
CvEM
----
.. ocv:class:: CvEM

The EM model.

The class implements the EM algorithm as described in the beginning of this section.
CvEM::train
-----------
@@ -177,20 +163,21 @@ CvEM::train
:param labels: The optional output "class label" for each sample: :math:`\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N` (indices of the most probable mixture component for each sample).
Estimates the Gaussian mixture parameters from a sample set.
Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the
*Maximum Likelihood Estimate* of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure:
:math:`p_{i,k}` in ``probs``,
:math:`a_k` in ``means`` ,
:math:`S_k` in ``covs[k]``,
:math:`\pi_k` in ``weights`` , and optionally computes the output "class label" for each sample:
:math:`\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N` (indices of the most probable mixture for each sample).
The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the
:ref:`Bayes classifier`.
For an example of clustering random samples of the multi-Gaussian distribution using EM, see ``em.cpp`` sample in the OpenCV distribution.
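In the same spirit, a minimal training sketch (the synthetic data is an illustrative assumption): ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    int main()
    {
        // Two synthetic 2D Gaussian clouds stacked into one sample matrix.
        cv::Mat samples( 200, 2, CV_32FC1 );
        cv::randn( samples.rowRange(0, 100),   cv::Scalar(0.0), cv::Scalar(1.0) );
        cv::randn( samples.rowRange(100, 200), cv::Scalar(5.0), cv::Scalar(1.0) );

        CvEMParams params;
        params.nclusters = 2; // the only essential problem-dependent parameter

        CvEM em;
        cv::Mat labels;
        em.train( samples, cv::Mat(), params, &labels );
        // labels now holds the index of the most probable mixture for each sample.
        return 0;
    }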
CvEM::predict
-------------
@@ -205,10 +192,6 @@ CvEM::predict
:param probs: If it is not null then the method will write posterior probabilities of each component given the sample data to this parameter.
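Continuing the training sketch from ``CvEM::train`` above: ::

    cv::Mat probs;
    float idx = em.predict( samples.row(0), &probs );
    // idx is the most probable mixture component; probs holds the posteriors p_{i,k}.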
CvEM::getNClusters
------------------
.. ocv:function:: int CvEM::getNClusters() const
@@ -218,10 +201,6 @@ CvEM::getNClusters
Returns the number of mixture components :math:`M` in the Gaussian mixture model.
CvEM::getMeans
--------------
.. ocv:function:: Mat CvEM::getMeans() const
@@ -231,10 +210,6 @@ CvEM::getNClusters
Returns mixture means :math:`a_k`.
CvEM::getCovs
-------------
.. ocv:function:: void CvEM::getCovs(std::vector<cv::Mat>& covs) const
@@ -244,10 +219,6 @@ CvEM::getCovs
Returns mixture covariance matrices :math:`S_k`.
CvEM::getWeights
----------------
.. ocv:function:: Mat CvEM::getWeights() const
@@ -257,10 +228,6 @@ CvEM::getWeights
Returns mixture weights :math:`\pi_k`.
CvEM::getProbs
--------------
.. ocv:function:: Mat CvEM::getProbs() const
@@ -270,10 +237,6 @@ CvEM::getProbs
Returns the probabilities :math:`p_{i,k}` that sample :math:`i` belongs to mixture component :math:`k`.
CvEM::getLikelihood
-------------------
.. ocv:function:: double CvEM::getLikelihood() const
@@ -283,10 +246,6 @@ CvEM::getLikelihood
Returns the logarithm of the likelihood.
CvEM::getLikelihoodDelta
------------------------
.. ocv:function:: double CvEM::getLikelihoodDelta() const
@@ -296,10 +255,6 @@ CvEM::getLikelihoodDelta
Returns the difference between the logarithm of the likelihood on the last iteration and on the previous iteration.
CvEM::write_params
------------------
.. ocv:function:: void CvEM::write_params( CvFileStorage* fs ) const
@@ -309,10 +264,6 @@ CvEM::write_params
:param fs: A file storage where parameters will be written.
CvEM::read_params
-----------------
.. ocv:function:: void CvEM::read_params( CvFileStorage* fs, CvFileNode* node )
@@ -329,4 +280,3 @@ Read parameters will be used for the EM algorithm in this ``CvEM`` object.
@@ -7,9 +7,9 @@ Statistical Models
CvStatModel
-----------
.. ocv:class:: CvStatModel

Base class for statistical models in ML. ::
class CvStatModel
{
@@ -40,21 +40,13 @@ Base class for statistical models in ML ::
In this declaration, some methods are commented out. These are methods for which there is no unified API (with the exception of the default constructor). However, there are many similarities in the syntax and semantics that are briefly described below in this section, as if they were part of the base class.
CvStatModel::CvStatModel
------------------------
.. ocv:function:: CvStatModel::CvStatModel()
Serves as a default constructor.
Each statistical model class in ML has a default constructor without parameters. This constructor is useful for a two-stage model construction, when the default constructor is followed by ``train()`` or ``load()`` .
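A short sketch of the two-stage pattern with a concrete model class (the file name is a placeholder): ::

    CvBoost model;             // stage 1: default construction
    model.load( "boost.yml" ); // stage 2: restore a previously saved state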
CvStatModel::CvStatModel(...)
-----------------------------
@@ -64,10 +56,6 @@ CvStatModel::CvStatModel(...)
Most ML classes provide a single-step "construct and train" constructor. It is equivalent to the default constructor, followed by the ``train()`` method with the parameters that are passed to the constructor.
CvStatModel::~CvStatModel
-------------------------
.. ocv:function:: CvStatModel::~CvStatModel()
@@ -87,10 +75,6 @@ The destructor of the base class is declared as virtual. So, it is safe to write
Normally, the destructor of each derived class does nothing. But in this instance, it calls the overridden method ``clear()`` that deallocates all the memory.
CvStatModel::clear
------------------
.. ocv:function:: void CvStatModel::clear()
@@ -99,10 +83,6 @@ CvStatModel::clear
The method ``clear`` does the same job as the destructor: it deallocates all the memory occupied by the class members. But the object itself is not destructed and can be reused further. This method is called from the destructor, from the ``train`` methods of the derived classes, from the methods ``load()``, ``read()``, or even explicitly by the user.
CvStatModel::save
-----------------
.. ocv:function:: void CvStatModel::save( const char* filename, const char* name=0 )
@@ -111,10 +91,6 @@ CvStatModel::save
The method ``save`` saves the complete model state to the specified XML or YAML file with the specified name or default name (which depends on a particular class). *Data persistence* functionality from ``CxCore`` is used.
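Continuing the earlier sketch (the file and model names are placeholders): ::

    model.save( "boost.yml", "my_boost" ); // name is optional; omit it to use the default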
CvStatModel::load
-----------------
.. ocv:function:: void CvStatModel::load( const char* filename, const char* name=0 )
@@ -124,10 +100,6 @@ CvStatModel::load
The method ``load`` loads the complete model state with the specified name (or default model-dependent name) from the specified XML or YAML file. The previous model state is cleared by ``clear()`` .
CvStatModel::write
------------------
.. ocv:function:: void CvStatModel::write( CvFileStorage* storage, const char* name )
@@ -136,9 +108,6 @@ CvStatModel::write
The method ``write`` stores the complete model state in the file storage with the specified name or default name (which depends on the particular class). The method is called by ``save()`` .
CvStatModel::read
-----------------
@@ -147,14 +116,10 @@ CvStatModel::read
Reads the model from the file storage.
The method ``read`` restores the complete model state from the specified node of the file storage. Use the function
:ocv:func:`GetFileNodeByName` to locate the node.
The previous model state is cleared by ``clear()`` .
CvStatModel::train
------------------
.. ocv:function:: bool CvStatModel::train( const Mat& train_data, [int tflag,] ..., const Mat& responses, ..., [const Mat& var_idx,] ..., [const Mat& sample_idx,] ... [const Mat& var_type,] ..., [const Mat& missing_mask,] <misc_training_alg_params> ... )
@@ -167,7 +132,7 @@ The method trains the statistical model using a set of input feature vectors and
* ``tflag=CV_COL_SAMPLE`` The feature vectors are stored as columns.
The ``train_data`` must have the ``CV_32FC1`` (32-bit floating-point, single-channel) format. Responses are usually stored in a 1D vector (a row or a column) of ``CV_32SC1`` (only in the classification problem) or ``CV_32FC1`` format, one value per input vector. However, some algorithms, such as various flavors of neural nets, take vector responses.
For classification problems, the responses are discrete class labels. For regression problems, the responses are values of the function to be approximated. Some algorithms can deal only with classification problems, some only with regression problems, and some can deal with both. In the latter case, the type of output variable is either passed as a separate parameter or as the last element of the ``var_type`` vector:
@@ -177,16 +142,12 @@ For classification problems, the responses are discrete class labels. For regres
Types of input variables can also be specified using ``var_type`` . Most algorithms can handle only ordered input variables.
Many ML models may be trained on a selected feature subset, and/or on a selected sample subset of the training set. To make it easier for you, the method ``train`` usually includes the ``var_idx`` and ``sample_idx`` parameters. The former parameter identifies variables (features) of interest, and the latter one identifies samples of interest. Both vectors are either integer ( ``CV_32SC1`` ) vectors (lists of 0-based indices) or 8-bit ( ``CV_8UC1`` ) masks of active variables/samples. You may pass ``NULL`` pointers instead of either of the arguments, meaning that all of the variables/samples are used for training.
Additionally, some algorithms can handle missing measurements, that is, when certain features of certain training samples have unknown values (for example, they forgot to measure the temperature of patient A on Monday). The parameter ``missing_mask`` , an 8-bit matrix of the same size as ``train_data`` , is used to mark the missed values (non-zero elements of the mask).
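A brief sketch of building such a mask, assuming a ``train_data`` matrix as in the earlier examples: ::

    // Mark feature 1 of sample 3 as missing; all other measurements are present.
    cv::Mat missing_mask( train_data.size(), CV_8UC1, cv::Scalar(0) );
    missing_mask.at<uchar>(3, 1) = 1;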
Usually, the previous model state is cleared by ``clear()`` before running the training procedure. However, some algorithms may optionally update the model state with the new training data, instead of resetting it.
CvStatModel::predict
--------------------
.. ocv:function:: float CvStatModel::predict( const Mat& sample[, <prediction_params>] ) const