Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance.
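As a concrete illustration of this reweight-and-vote scheme, the following is a minimal sketch of Discrete AdaBoost with decision stumps; the stump base learner and all names here are illustrative choices, not prescribed by the text.

```python
# Minimal sketch of Discrete AdaBoost: sequentially fit a weak learner to
# reweighted data, then combine the fitted classifiers by weighted vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Fit Discrete AdaBoost; y must be coded as -1/+1."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform starting weights
    learners, coefs = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)       # fit to reweighted data
        pred = stump.predict(X)
        miss = (pred != y)
        err = np.clip(w[miss].sum() / w.sum(), 1e-10, 1 - 1e-10)
        c = np.log((1.0 - err) / err)          # this round's vote weight
        w *= np.exp(c * miss)                  # upweight misclassified points
        w /= w.sum()                           # renormalize
        learners.append(stump)
        coefs.append(c)
    return learners, coefs

def adaboost_predict(learners, coefs, X):
    """Weighted majority vote of the sequence of classifiers."""
    votes = sum(c * clf.predict(X) for c, clf in zip(learners, coefs))
    return np.sign(votes)
```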
We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion.
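In symbols, one standard way to write the model being approximated (notation assumed here for illustration) is

\[
F(x) = \sum_{m=1}^{M} f_m(x), \qquad p(x) = \Pr(y = 1 \mid x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}},
\]

so that \(F(x) = \tfrac{1}{2}\log\{p(x)/(1 - p(x))\}\), and the fitting criterion is the expected Bernoulli log-likelihood

\[
\ell(F) = \mathrm{E}\bigl[\, y^{*}\log p(x) + (1 - y^{*})\log\{1 - p(x)\} \,\bigr], \qquad y^{*} = (y + 1)/2 \in \{0, 1\}.
\]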
We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some.
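For \(J\) classes, a natural way to state such a multinomial criterion (again with assumed notation) uses the symmetric multiple logistic transform

\[
p_j(x) = \frac{e^{F_j(x)}}{\sum_{k=1}^{J} e^{F_k(x)}}, \qquad \sum_{k=1}^{J} F_k(x) = 0,
\]

with log-likelihood \(\ell = \sum_{j=1}^{J} y_j^{*} \log p_j(x)\), where \(y_j^{*} = 1\) if the observation belongs to class \(j\) and \(0\) otherwise.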
We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50.
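One way such savings arise, assuming the modification is weight trimming (dropping from each round's fit the training observations whose current weights are negligible), can be sketched as follows; the keep fraction is an illustrative choice.

```python
# Hedged sketch of weight trimming: at each boosting round, train only on
# the points carrying the bulk of the weight (keep_fraction is illustrative).
import numpy as np

def trim_indices(w, keep_fraction=0.99):
    """Indices of the heaviest points whose weights sum to at least
    keep_fraction of the total; the rest are skipped this round."""
    order = np.argsort(w)[::-1]                  # heaviest points first
    cum = np.cumsum(w[order]) / w.sum()
    k = int(np.searchsorted(cum, keep_fraction)) + 1
    return order[:k]

# Inside a boosting round one would then fit on the trimmed subset:
#   idx = trim_indices(w)
#   stump.fit(X[idx], y[idx], sample_weight=w[idx])
```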
Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable for large-scale data mining applications.
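As a hedged sketch of how such a base learner can be obtained in practice: scikit-learn grows a tree best-first whenever max_leaf_nodes is set, so truncation to a fixed number of terminal nodes is a one-line configuration (the value 8 is illustrative, not taken from the text).

```python
# Best-first tree truncated to a fixed number of terminal nodes, usable as
# the base learner in a boosting loop like the one sketched earlier.
from sklearn.tree import DecisionTreeClassifier

base_learner = DecisionTreeClassifier(max_leaf_nodes=8)  # best-first growth
```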