General convergence results for linear discriminant updates

Citation
A.J. Grove et al., General convergence results for linear discriminant updates, MACH LEARN, 43(3), 2001, pp. 173-210
Number of citations
32
Subject categories
AI Robotics and Automatic Control
Journal title
MACHINE LEARNING
ISSN journal
0885-6125
Volume
43
Issue
3
Year of publication
2001
Pages
173 - 210
Database
ISI
SICI code
0885-6125(200106)43:3<173:GCRFLD>2.0.ZU;2-Y
Abstract
The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of "quasi-additive" algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method "automatically" produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
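To make the contrast in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of the two classic mistake-driven updates it unifies: Perceptron's additive update and a Winnow-style multiplicative (exponentiated) update, driven by the same generic mistake loop. The dataset, learning rate, and function names are hypothetical.

```python
import numpy as np

def perceptron_update(w, x, y, eta=1.0):
    """Additive update: on a mistake, move w a step toward y*x."""
    return w + eta * y * x

def winnow_update(w, x, y, eta=1.0):
    """Multiplicative (exponentiated-gradient) update: on a mistake,
    rescale each weight by exp(eta * y * x_i); weights stay positive."""
    return w * np.exp(eta * y * x)

def train(update, X, Y, w0, epochs=10):
    """Generic mistake-driven loop shared by both algorithms:
    the weight vector changes only when sign(w . x) != y."""
    w = w0.copy()
    mistakes = 0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            if np.sign(w @ x) != y:
                w = update(w, x, y)
                mistakes += 1
    return w, mistakes
```

Only the per-mistake update rule differs between the two algorithms; the paper's "quasi-additive" class captures exactly this shared structure, and its interpolating algorithms sit between the two update rules above.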