We study the performance of a generalized perceptron algorithm for learning realizable dichotomies, with an error-dependent adaptive learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optimal solution and that the generalization error asymptotically obeys a universal inverse power law in the number of examples. The system is capable of escaping from local minima and adapts rapidly to shifts in the target function. The general theory is illustrated for the perceptron and committee machine.
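As a rough illustration of the setting described above, the following sketch trains a simple perceptron on a realizable dichotomy while tying its step size to a running estimate of its own error. The specific update rule, the moving-average error estimate, and all parameter values here are assumptions chosen for illustration, not the error-dependent rule analyzed in the paper.

```python
# Illustrative sketch only: a perceptron learning a realizable dichotomy with a
# step size scaled by a running error estimate (a stand-in for an
# "error-dependent adaptive learning rate"); not the paper's exact rule.
import numpy as np

rng = np.random.default_rng(0)
d = 20                                   # input dimension (assumed)
w_teacher = rng.normal(size=d)           # target rule: a realizable dichotomy
w = np.zeros(d)                          # student weights

err_estimate = 1.0                       # running estimate of the error
for t in range(1, 20001):
    x = rng.normal(size=d)
    y = np.sign(w_teacher @ x)           # label provided by the target rule
    y_hat = np.sign(w @ x) if np.any(w) else 1.0
    mistake = float(y_hat != y)
    # error-dependent learning rate: the step shrinks as the error estimate falls
    eta = err_estimate
    if mistake:
        w += eta * y * x                 # perceptron-style correction on mistakes
    # exponential moving average of the mistake rate as the error estimate
    err_estimate = 0.99 * err_estimate + 0.01 * mistake

# for a perceptron, the generalization error equals the student-teacher angle / pi
cos_theta = w @ w_teacher / (np.linalg.norm(w) * np.linalg.norm(w_teacher))
eps = np.arccos(np.clip(cos_theta, -1.0, 1.0)) / np.pi
print(f"estimated generalization error after {t} examples: {eps:.4f}")
```

Plotting this error estimate against the number of examples on a log-log scale is one way to check for the inverse power-law decay the abstract refers to.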