The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of "quasi-additive" algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including not only Perceptron and Winnow but also many new algorithms. Our proof hinges on a generic measure-of-progress construction that gives insight into when and how such algorithms converge.
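As a rough illustration of the setup (a minimal sketch, not the paper's definitions: the function names and toy data below are hypothetical, and we assume the common quasi-additive form in which an internal state vector z is updated additively while the prediction weights are obtained componentwise through a link function f):

    # A minimal sketch of a quasi-additive, mistake-driven learner (assumed
    # form: state vector z updated additively; prediction weights w_i = f(z_i)).
    import numpy as np

    def quasi_additive(examples, f, eta=1.0, epochs=2):
        """Mistake-driven passes over (x, y) pairs, with y in {-1, +1}."""
        z = np.zeros(len(examples[0][0]))   # additively updated internal state
        mistakes = 0
        for _ in range(epochs):
            for x, y in examples:
                w = f(z)                    # prediction weights via the link f
                if y * np.dot(w, x) <= 0:   # mistake (or zero margin): update z
                    z = z + eta * y * np.asarray(x)
                    mistakes += 1
        return f(z), mistakes

    # Hypothetical toy data, separable by the weight vector (1, -1, 0).
    data = [(np.array([1.0, 0.0, 1.0]), +1),
            (np.array([0.0, 1.0, 1.0]), -1)]

    w_perc, m_perc = quasi_additive(data, f=lambda z: z)  # identity link: Perceptron
    w_bal,  m_bal  = quasi_additive(data, f=np.sinh)      # two-sided exponential link,
                                                          # a balanced-Winnow-style update

Different choices of f yield different members of the class; the unified analysis asks when a single convergence argument covers all of them.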
Our measure-of-progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing ones. When applied to known algorithms, our method "automatically" produces close variants of the existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and we analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
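The abstract does not spell out the interpolating family's parameterization; one standard family with exactly this interpolating behavior (offered here only as an assumed illustration, not as the paper's definition) uses a polynomial link function

    $f(z_i) = \operatorname{sign}(z_i)\,|z_i|^{p-1}, \qquad p \ge 2,$

which reduces to the Perceptron's identity link at p = 2 and behaves increasingly like a multiplicative, Winnow-style update as p grows; choices on the order of p proportional to ln n are known in the literature to yield Winnow-like mistake bounds.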