S. N. Diggavi et al., CONVERGENCE MODELS FOR ROSENBLATT'S PERCEPTRON LEARNING ALGORITHM, IEEE Transactions on Signal Processing, 43(7), 1995, pp. 1696-1702
In this paper, we present a stochastic analysis of the steady-state and transient convergence properties of a single-layer perceptron for fast learning (large step-size/input-power product). The training data are modeled using a system identification formulation with zero-mean Gaussian inputs. The perceptron weights are adjusted by a learning algorithm equivalent to Rosenblatt's perceptron convergence procedure. It is shown that the convergence points of the algorithm depend on the step size mu and the input signal power (variance) sigma_x^2, and that the algorithm is stable for essentially all mu > 0. Two coupled nonlinear recursions are derived that accurately model the transient behavior of the algorithm. We also examine how these convergence results are affected by noisy perceptron input vectors. Computer simulations are presented to verify the analytical models.
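The setup the abstract describes can be sketched as a small simulation: a reference system generates binary desired outputs from zero-mean Gaussian inputs, and a perceptron trained with a Rosenblatt-style error-correction update learns to match it. This is only a minimal illustration of that formulation; all parameter values, variable names, and the noise-free setting are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8            # input dimension (illustrative choice)
mu = 0.05        # step size mu (illustrative choice)
sigma_x = 1.0    # input standard deviation, so input power is sigma_x^2
num_iters = 20000

# System identification formulation: an unknown reference weight
# vector produces the desired binary outputs.
w_ref = rng.standard_normal(n)
w = np.zeros(n)  # perceptron weight estimate

for _ in range(num_iters):
    x = sigma_x * rng.standard_normal(n)  # zero-mean Gaussian input vector
    d = np.sign(w_ref @ x)                # desired output from the reference system
    y = np.sign(w @ x)                    # perceptron output
    # Rosenblatt-style update: the correction is nonzero only when y != d.
    w += 0.5 * mu * (d - y) * x

# Only the direction of w matters for the sign decision, so measure
# the cosine of the angle between the estimate and the reference.
cos_angle = (w @ w_ref) / (np.linalg.norm(w) * np.linalg.norm(w_ref))
print(f"alignment with reference weights: {cos_angle:.3f}")
```

Since the decision rule is scale-invariant, alignment (cosine close to 1) rather than weight-vector equality is the natural success measure here.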