We describe a new incremental algorithm for training linear threshold functions: the Relaxed Online Maximum Margin Algorithm, or ROMMA. ROMMA can be viewed as an approximation to the algorithm that repeatedly chooses the hyperplane that classifies previously seen examples correctly with the maximum margin. It is known that such a maximum-margin hypothesis can be computed by minimizing the length of the weight vector subject to a number of linear constraints.
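For concreteness, this is the standard hard-margin quadratic program (our gloss; the notation, with examples $(x_i, y_i)$ and labels $y_i \in \{-1, +1\}$, is assumed rather than taken from the abstract):

\[
\min_{w} \; \|w\|^2 \quad \text{subject to} \quad y_i (w \cdot x_i) \ge 1 \text{ for every previously seen example } (x_i, y_i).
\]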
ROMMA works by maintaining a relatively simple relaxation of these constraints that can be efficiently updated.
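As a minimal sketch of such an update in Python: assume the relaxation kept between rounds is the single halfspace $\{w : w \cdot w_t \ge \|w_t\|^2\}$ summarizing all past constraints, so that the new weight vector is the shortest $w$ satisfying this halfspace constraint together with the new example's margin constraint. The closed-form coefficients below follow from that assumption, and the function names are ours, not the paper's.

    import numpy as np

    def romma_update(w, x, y):
        # Shortest vector satisfying w_new . w >= ||w||^2 (relaxed summary of
        # past constraints) and y * (w_new . x) >= 1 (new example), y in {-1,+1}.
        if not np.any(w):
            # Very first update: minimize ||w_new|| subject to the new
            # example's constraint alone.
            return (y / np.dot(x, x)) * x
        wx, ww, xx = np.dot(w, x), np.dot(w, w), np.dot(x, x)
        denom = ww * xx - wx * wx
        if denom <= 1e-12:          # x (anti)parallel to w: degenerate, skip
            return w
        c = (ww * xx - y * wx) / denom    # coefficient on the old hypothesis
        d = ww * (1.0 - y * wx) / denom   # coefficient on the new example
        return c * w + d * y * x

    def romma_train(X, Y):
        # One mistake-driven online pass (plain ROMMA): update only when the
        # current hypothesis misclassifies the example.
        w = np.zeros(X.shape[1])
        for x, y in zip(X, Y):
            if y * np.dot(w, x) <= 0:
                w = romma_update(w, x, y)
        return w

On a mistake both constraints are active, which is why a two-coefficient closed form suffices; when the two constraints become linearly dependent the update is skipped.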
We prove a mistake bound for ROMMA that is the same as that proved for the perceptron algorithm. Our analysis implies that the maximum-margin algorithm also satisfies this mistake bound; this is the first worst-case performance guarantee for this algorithm.
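For reference, the perceptron bound in question is the standard one (our restatement of the classical result, not a formula from the abstract): if every example satisfies $\|x_t\| \le R$ and some unit vector $u$ separates the data with margin $y_t (u \cdot x_t) \ge \gamma > 0$, then the number of mistakes is at most

\[
\left( \frac{R}{\gamma} \right)^{2}.
\]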
We describe some experiments using ROMMA and a variant that updates its hypothesis more aggressively, applied as batch algorithms to recognize handwritten digits. The computational complexity and simplicity of these algorithms are similar to those of the perceptron algorithm, but their generalization is much better. We show that a batch algorithm based on aggressive ROMMA converges to the fixed-threshold SVM hypothesis.
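One way such a batch algorithm might be organized (our construction, reusing romma_update from the sketch above; the epoch cap and tolerance are assumptions, not details from the paper) is to sweep the training set repeatedly, applying the update whenever an example's margin $y (w \cdot x)$ falls below 1 rather than only on outright mistakes, and to stop once a full sweep produces no violations:

    def aggressive_romma_batch(X, Y, max_epochs=100, tol=1e-6):
        # Batch use of the aggressive variant: update on any margin below 1,
        # not just on misclassifications; sweep until no example violates.
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            any_violation = False
            for x, y in zip(X, Y):
                if y * np.dot(w, x) < 1.0 - tol:   # aggressive trigger
                    w = romma_update(w, x, y)
                    any_violation = True
            if not any_violation:
                break
        return w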