Bounding the generalization error of convex combinations of classifiers: balancing the dimensionality and the margins

Citation
Koltchinskii, Vladimir et al., Bounding the generalization error of convex combinations of classifiers: balancing the dimensionality and the margins, Annals of Applied Probability, 13(1), 2003, pp. 213-252
ISSN journal
1050-5164
Volume
13
Issue
1
Year of publication
2003
Pages
213-252
Database
ACNP
SICI code
Abstract
A problem of bounding the generalization error of a classifier f ∈ conv(H), where H is a "base" class of functions (classifiers), is considered. This problem frequently occurs in computer learning, where efficient algorithms that combine simple classifiers into a complex one (such as boosting and bagging) have attracted a lot of attention. Using Talagrand's concentration inequalities for empirical processes, we obtain new sharper bounds on the generalization error of combined classifiers that take into account both the empirical distribution of "classification margins" and an "approximate dimension" of the classifiers, and study the performance of these bounds in several experiments with learning algorithms.
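For orientation, bounds of this general family typically have the following margin-type form; this is a schematic sketch only, not the paper's exact theorem, and the absolute constant C, the confidence level α, and the use of a VC-type complexity V(H) of the base class are illustrative assumptions. The paper's refinement balances such margin terms against an "approximate dimension" of the combined classifier, which is not reproduced here.

% Schematic margin-type generalization bound for f in conv(H) -- illustrative only.
% C is an unspecified absolute constant, V(H) a complexity measure (e.g. VC dimension)
% of the base class H, n the sample size; the bound is assumed to hold with
% probability at least 1 - alpha over the sample.
\[
  \mathbb{P}\{\, y f(X) \le 0 \,\}
  \;\le\;
  \inf_{\delta \in (0,1]}
  \left[
    \mathbb{P}_n\{\, y f(X) \le \delta \,\}
    \;+\; \frac{C}{\delta}\sqrt{\frac{V(H)}{n}}
    \;+\; \sqrt{\frac{\log(1/\alpha)}{2n}}
  \right].
\]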