**Speaker**: Chien-I Liao

**Title**: Efficient Margin Maximizing with Boosting

**Date**: December 13th, 2005.

**Abstract**:

AdaBoost produces a linear combination of base hypotheses and predicts with the sign of this linear combination. It has been observed that the generalization error of the algorithm continues to improve even after all examples are classified correctly by the current signed linear combination, which can be viewed as a hyperplane in the feature space where the base hypotheses form the features. The improvement is attributed to the experimental observation that the distances (margins) of the examples to the separating hyperplane keep increasing even when the training error is already zero; that is, all examples are on the correct side of the hyperplane.
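The margin notion above can be made concrete with a minimal sketch. Here the base-hypothesis predictions, coefficient vector, and labels (`h`, `alpha`, `y`) are illustrative names, not from the talk; the margin of an example is its signed distance to the hyperplane, normalized by the L1 norm of the coefficients.

```python
import numpy as np

def margins(h, alpha, y):
    """Signed distance of each example to the separating hyperplane,
    normalized by the L1 norm of the coefficient vector."""
    f = h @ alpha                          # combined score per example
    return y * f / np.sum(np.abs(alpha))   # positive => correctly classified

# Toy example: 3 examples, 2 base hypotheses with predictions in {-1, +1}.
h = np.array([[+1, +1],
              [+1, -1],
              [-1, -1]])
alpha = np.array([0.8, 0.2])
y = np.array([+1, +1, -1])
print(margins(h, alpha, y))  # all positive: training error is zero
```

All margins being positive corresponds to zero training error; the minimum of these values is the quantity AdaBoost* seeks to maximize.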

We give a new version of AdaBoost, called AdaBoost*, that explicitly maximizes the minimum margin of the examples up to a given precision. The algorithm incorporates a current estimate of the achievable margin into its calculation of the linear coefficients of the base hypotheses. The number of base hypotheses needed is essentially the same as the number needed by a previous AdaBoost-related algorithm that required an explicit estimate of the achievable margin.
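As a rough illustration of how a margin estimate can enter the coefficient calculation, the sketch below follows the style of update described in the referenced Rätsch–Warmuth paper: the usual AdaBoost step for a hypothesis with edge `gamma_t` is reduced by a term depending on the running margin estimate `rho_t`. The names and exact form here are assumptions for illustration, not the talk's definitive algorithm.

```python
import math

def adaboost_star_alpha(gamma_t, rho_t):
    """Coefficient for the new base hypothesis: the plain AdaBoost step,
    shrunk by a correction based on the current margin estimate rho_t."""
    plain = 0.5 * math.log((1 + gamma_t) / (1 - gamma_t))    # AdaBoost step
    correction = 0.5 * math.log((1 + rho_t) / (1 - rho_t))   # margin estimate
    return plain - correction

# With rho_t = 0 this reduces to the usual AdaBoost coefficient; a larger
# margin estimate yields a smaller step.
print(adaboost_star_alpha(0.4, 0.0))
print(adaboost_star_alpha(0.4, 0.1))
```

In the paper, the margin estimate is maintained from the minimum edge observed so far, reduced by the precision parameter; here it is simply passed in.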

**Reference**: http://www.boosting.org/papers/RaeWar03.pdf

**Slides**: http://cs.nyu.edu/~cil217/ML/ml.htm