Additive logistic regression: A statistical view of boosting

Citation
J. Friedman et al., Additive logistic regression: A statistical view of boosting, ANN STATIST, 28(2), 2000, pp. 337-374
Number of citations
29
Subject categories
Mathematics
Journal title
ANNALS OF STATISTICS
ISSN journal
0090-5364
Volume
28
Issue
2
Year of publication
2000
Pages
337-374
Database
ISI
SICI code
0090-5364(200004)28:2<337:ALRASV>2.0.ZU;2-C
Abstract
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.
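
To make the reweighting-and-weighted-vote scheme concrete, the following is a minimal sketch of discrete AdaBoost with decision stumps, assuming labels coded as y in {-1, +1}; the function names and the exhaustive stump learner are illustrative choices, not the authors' implementation. The paper's central observation is that the accumulated score F(x) is an additive fit whose population minimizer under the exponential criterion E[exp(-yF(x))] is half the log-odds of P(y = 1 | x), which is why boosting approximates additive logistic regression.

import numpy as np

def fit_stump(X, y, w):
    # Exhaustive search for the single-feature threshold split with the
    # smallest weighted misclassification error.
    best = (np.inf, 0, 0.0, 1)                    # (error, feature, threshold, sign)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def predict_stump(stump, X):
    _, j, t, s = stump
    return s * np.where(X[:, j] <= t, 1, -1)

def adaboost(X, y, M=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                       # start with uniform weights
    ensemble = []
    for _ in range(M):
        stump = fit_stump(X, y, w)
        err = np.clip(stump[0], 1e-10, 1 - 1e-10) # guard the log below
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this classifier
        pred = predict_stump(stump, X)
        w *= np.exp(-alpha * y * pred)            # upweight misclassified cases
        w /= w.sum()                              # renormalize
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote: sign of the additive score F(x).
    F = sum(a * predict_stump(s, X) for a, s in ensemble)
    return np.sign(F)

# Illustrative usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y, M=10)
print(np.mean(predict(model, X) == y))            # training accuracy

The 0.5 factor in alpha is one common parameterization, derived from minimizing the exponential criterion; the paper's Discrete AdaBoost statement uses an equivalent update up to a constant scaling of F.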