
Gradient Boosted Trees

Gradient Boosted Trees (GBT) is a generalized boosting algorithm introduced by Jerome Friedman: http://www.salfordsystems.com/doc/GreedyFuncApproxSS.pdf . In contrast to the AdaBoost.M1 algorithm, GBT can deal with both multiclass classification and regression problems. Moreover, it can use any differentiable loss function, and several popular ones are implemented. Using decision trees (:ocv:class:`CvDTree`) as base learners allows both ordered and categorical variables to be processed.

Training the GBT model

The Gradient Boosted Trees model is an ensemble of single regression trees built in a greedy fashion. The training procedure is an iterative process similar to numerical optimization via gradient descent. The summary loss on the training set depends only on the current model predictions for the training samples; in other words, \sum^N_{i=1}L(y_i, F(x_i)) \equiv \mathcal{L}(F(x_1), F(x_2), ... , F(x_N)) \equiv \mathcal{L}(F) . The gradient of \mathcal{L}(F) can be computed as follows:

grad(\mathcal{L}(F)) = \left( \dfrac{\partial{L(y_1, F(x_1))}}{\partial{F(x_1)}},
\dfrac{\partial{L(y_2, F(x_2))}}{\partial{F(x_2)}}, ... ,
\dfrac{\partial{L(y_N, F(x_N))}}{\partial{F(x_N)}} \right) .

At every training step, a single regression tree is built to predict the components of the antigradient vector. The step length is computed according to the loss function, separately for every region determined by a tree leaf. This step can be eliminated by changing the values of the leaves directly.

The main scheme of the training process is shown below:

  1. Find the best constant model.
  2. For i in [1,M] :
    1. Compute the antigradient.
    2. Grow a regression tree to predict antigradient components.
    3. Change values in the tree leaves.
    4. Add the tree to the model.
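
For example, for the squared loss defined below, the antigradient component for the i-th sample is simply the current residual:

-\dfrac{\partial{L(y_i, F(x_i))}}{\partial{F(x_i)}} = y_i - F(x_i) ,

so at every iteration the new tree is fit to the residuals of the current ensemble.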

The following loss functions are implemented for regression problems:

  • Squared loss (CvGBTrees::SQUARED_LOSS): L(y,f(x))=\dfrac{1}{2}(y-f(x))^2

  • Absolute loss (CvGBTrees::ABSOLUTE_LOSS): L(y,f(x))=|y-f(x)|

  • Huber loss (CvGBTrees::HUBER_LOSS): L(y,f(x)) = \left\{ \begin{array}{lr} \delta\cdot\left(|y-f(x)|-\dfrac{\delta}{2}\right) & : |y-f(x)|>\delta\\ \dfrac{1}{2}\cdot(y-f(x))^2 & : |y-f(x)|\leq\delta \end{array} \right. ,

    where \delta is the \alpha-quantile estimate of |y-f(x)| . In the current implementation, \alpha=0.2 .
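
For reference, the point-wise form of these three losses can be written out directly. The sketch below is purely illustrative and is not part of the OpenCV API::

    #include <cmath>

    // Illustrative only: point-wise regression losses as used by CvGBTrees.
    static double squaredLoss(double y, double f)  { return 0.5 * (y - f) * (y - f); }
    static double absoluteLoss(double y, double f) { return std::fabs(y - f); }

    // delta is the alpha-quantile estimate of |y - f(x)| over the training set
    // (alpha = 0.2 in the current implementation).
    static double huberLoss(double y, double f, double delta)
    {
        double r = std::fabs(y - f);
        return r > delta ? delta * (r - 0.5 * delta) : 0.5 * r * r;
    }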

The following loss functions are implemented for classification problems:

  • Deviance or cross-entropy loss (CvGBTrees::DEVIANCE_LOSS): K functions are built, one function for each output class, and L(y,f_1(x),...,f_K(x)) = -\sum^K_{k=1}1(y=k)\ln{p_k(x)} , where p_k(x)=\dfrac{\exp{f_k(x)}}{\sum^K_{i=1}\exp{f_i(x)}} is the estimate of the probability of y=k .
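
The probability estimate p_k(x) is a softmax over the K per-class responses, so for a single sample only the term with k=y survives the indicator. A minimal illustrative sketch (not part of the OpenCV API; class labels are assumed to be 1..K here)::

    #include <cmath>
    #include <vector>

    // Illustrative only: deviance (cross-entropy) loss for one sample, given the
    // per-class ensemble responses f_1(x), ..., f_K(x).
    static double devianceLoss(int y, const std::vector<double>& f)
    {
        double denom = 0.0;
        for (size_t i = 0; i < f.size(); ++i)
            denom += std::exp(f[i]);                 // sum_i exp(f_i(x))
        double p_y = std::exp(f[y - 1]) / denom;     // p_y(x), labels assumed 1..K
        return -std::log(p_y);
    }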

As a result, you get the following model:

f(x) = f_0 + \nu\cdot\sum^M_{i=1}T_i(x) ,

where f_0 is the initial guess (the best constant model) and \nu is a regularization parameter from the interval (0,1] , further called shrinkage.

Predicting with the GBT Model

To get the GBT model prediction, you need to compute the sum of responses of all the trees in the ensemble. For regression problems, it is the answer. For classification problems, the result is \arg\max_{i=1..K}(f_i(x)) .
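
The sketch below only illustrates how the single-tree responses are combined; it is not the actual implementation, which :ocv:func:`CvGBTrees::predict` performs internally::

    #include <algorithm>
    #include <vector>

    // Illustrative only: combining already computed single-tree responses T_i(x)
    // into the GBT answer. f0 is the initial constant model, nu is the shrinkage.
    static double regressionResponse(double f0, double nu,
                                     const std::vector<double>& treeResponses)
    {
        double f = f0;
        for (size_t i = 0; i < treeResponses.size(); ++i)
            f += nu * treeResponses[i];
        return f;
    }

    // For classification, one such sum f_k(x) is computed per class and the
    // predicted label is the index of the largest one.
    static int classificationResponse(const std::vector<double>& classResponses)
    {
        return static_cast<int>(std::max_element(classResponses.begin(),
                                                 classResponses.end())
                                - classResponses.begin());
    }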

CvGBTreesParams

GBT training parameters.

The structure contains parameters for each single decision tree in the ensemble, as well as characteristics of the whole model. The structure is derived from :ocv:class:`CvDTreeParams` but not all of the decision tree parameters are supported: cross-validation, pruning, and class priors are not used.

CvGBTreesParams::CvGBTreesParams

By default the following constructor is used::

    CvGBTreesParams(CvGBTrees::SQUARED_LOSS, 200, 0.8f, 0.01f, 3, false)
        : CvDTreeParams( 3, 10, 0, false, 10, 0, false, false, 0 )
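
The same parameters can also be set field by field. The values below are arbitrary and only illustrate the meaning of each field (max_depth and use_surrogates are inherited from :ocv:class:`CvDTreeParams`)::

    #include <opencv2/ml/ml.hpp>

    static CvGBTreesParams makeParams()
    {
        CvGBTreesParams params;
        params.loss_function_type = CvGBTrees::SQUARED_LOSS; // one of the losses above
        params.weak_count        = 200;    // number of boosting iterations M
        params.shrinkage         = 0.01f;  // regularization parameter nu
        params.subsample_portion = 0.8f;   // portion of the training set used per iteration
        params.max_depth         = 3;      // depth of each regression tree
        params.use_surrogates    = false;  // use surrogate splits for missing values
        return params;
    }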

CvGBTrees

The class implements the Gradient boosted tree model as described in the beginning of this section.

CvGBTrees::CvGBTrees

Default and training constructors.

The constructors follow conventions of :ocv:func:`CvStatModel::CvStatModel`. See :ocv:func:`CvStatModel::train` for parameters descriptions.

CvGBTrees::train

Trains a Gradient boosted tree model.

The first train method follows the common template (see :ocv:func:`CvStatModel::train`). Both tflag values (CV_ROW_SAMPLE, CV_COL_SAMPLE) are supported. trainData must be of the CV_32F type. responses must be a matrix of type CV_32S or CV_32F. In both cases it is converted into a CV_32F matrix inside the training procedure. varIdx and sampleIdx must be lists of indices (CV_32S) or masks (CV_8U or CV_8S). update is a dummy parameter.

The second form of the :ocv:func:`CvGBTrees::train` function uses :ocv:class:`CvMLData` as a data set container. update is still a dummy parameter.

All parameters specific to the GBT model are passed into the training function as a :ocv:class:`CvGBTreesParams` structure.
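
A possible call of the first (cv::Mat based) form is sketched below; the data sizes and values are made up purely for illustration::

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    int main()
    {
        // 100 samples with 5 ordered features each; every row is one sample.
        cv::Mat trainData(100, 5, CV_32F);
        cv::randu(trainData, cv::Scalar(0), cv::Scalar(1));

        // One (regression) response per sample.
        cv::Mat responses(100, 1, CV_32F);
        cv::randu(responses, cv::Scalar(0), cv::Scalar(1));

        CvGBTreesParams params;
        params.loss_function_type = CvGBTrees::SQUARED_LOSS;
        params.weak_count = 100;
        params.shrinkage = 0.05f;

        CvGBTrees gbtrees;
        gbtrees.train(trainData, CV_ROW_SAMPLE, responses,
                      cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(), params);
        return 0;
    }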

CvGBTrees::predict

Predicts a response for an input sample.

The method predicts the response corresponding to the given sample (see :ref:`Predicting with GBT`). The result is either the class label or the estimated function value. The :ocv:func:`CvGBTrees::predict` method enables using the parallel version of the GBT model prediction if OpenCV is built with the TBB library. In this case, predictions of single trees are computed in parallel.
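
Continuing the training sketch from :ocv:func:`CvGBTrees::train` above, a prediction for one sample could look as follows (the sample layout must match trainData)::

    // One sample with the same number and order of features as trainData.
    cv::Mat sample(1, 5, CV_32F);
    cv::randu(sample, cv::Scalar(0), cv::Scalar(1));

    // For regression the returned value is the estimated function value;
    // for classification it is the predicted class label.
    float response = gbtrees.predict(sample);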

CvGBTrees::clear

Clears the model.

The function deletes the data set information and all the weak models and sets all internal variables to the initial state. The function is called in :ocv:func:`CvGBTrees::train` and in the destructor.

CvGBTrees::calc_error

Calculates a training or testing error.

If :ocv:class:`CvMLData` is used to store the data set, :ocv:func:`CvGBTrees::calc_error` can be used to get the training/testing error easily and (optionally) all predictions on the training/testing set. If the Intel TBB library is used, the error is computed in a parallel way: predictions for different samples are computed at the same time. For regression problems, the mean squared error is returned. For classification problems, the result is the misclassification error in percent.
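
A possible usage sketch; the file name, response column, and split portion are placeholders chosen only for illustration::

    #include <opencv2/ml/ml.hpp>

    int main()
    {
        CvMLData data;
        data.read_csv("data.csv");       // placeholder file name
        data.set_response_idx(0);        // column that holds the responses

        CvTrainTestSplit split(0.8f);    // use 80% of the samples for training
        data.set_train_test_split(&split);

        CvGBTrees gbtrees;
        gbtrees.train(&data, CvGBTreesParams());

        float trainError = gbtrees.calc_error(&data, CV_TRAIN_ERROR);
        float testError  = gbtrees.calc_error(&data, CV_TEST_ERROR);
        (void)trainError; (void)testError;
        return 0;
    }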