CONVERGENCE MODELS FOR ROSENBLATT'S PERCEPTRON LEARNING ALGORITHM

Citation
S.N. Diggavi et al., Convergence models for Rosenblatt's perceptron learning algorithm, IEEE Transactions on Signal Processing, 43(7), 1995, pp. 1696-1702
Citations number
14
Subject Categories
Engineering, Electrical & Electronic
ISSN journal
1053-587X
Volume
43
Issue
7
Year of publication
1995
Pages
1696 - 1702
Database
ISI
SICI code
1053-587X(1995)43:7<1696:CMFRPL>2.0.ZU;2-6
Abstract
In this paper, we present a stochastic analysis of the steady-state and transient convergence properties of a single-layer perceptron for fast learning (large step-size, input-power product). The training data are modeled using a system identification formulation with zero-mean Gaussian inputs. The perceptron weights are adjusted by a learning algorithm equivalent to Rosenblatt's perceptron convergence procedure. It is shown that the convergence points of the algorithm depend on the step size mu and the input signal power (variance) sigma(x)(2), and that the algorithm is stable essentially for all mu > 0. Two coupled nonlinear recursions are derived that accurately model the transient behavior of the algorithm. We also examine how these convergence results are affected by noisy perceptron input vectors. Computer simulations are presented to verify the analytical models.
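For context, the setup the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's exact experiment: the teacher weight vector `w_true`, the dimension, the step size `mu`, and the number of iterations are all assumed values, and the update shown is the standard Rosenblatt error-correction rule applied in a system-identification configuration with zero-mean Gaussian inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system-identification setup (all parameters illustrative):
# a fixed "teacher" vector w_true generates binary labels from zero-mean
# Gaussian inputs, and the perceptron adapts its weights to match it.
n_dims = 5
n_steps = 20000
mu = 0.1          # step size (the abstract's mu)
sigma_x = 1.0     # input standard deviation (sigma_x^2 is the input power)

w_true = rng.standard_normal(n_dims)   # unknown system to identify
w = np.zeros(n_dims)                   # adaptive perceptron weights

for _ in range(n_steps):
    x = rng.normal(0.0, sigma_x, n_dims)   # zero-mean Gaussian input vector
    d = np.sign(w_true @ x)                # desired binary output
    y = np.sign(w @ x)                     # perceptron output
    # Rosenblatt error-correction update: weights change only on error,
    # and then by mu * d * x (the 0.5 factor absorbs the (d - y) = +/-2).
    w += 0.5 * mu * (d - y) * x

# With sign-valued outputs only the direction of w is identifiable,
# so convergence is measured by the angle between w and w_true.
cos_sim = (w @ w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
```

Note that because the output nonlinearity discards magnitude, the steady-state weight norm is set by mu and sigma_x^2 (the "step-size, input-power product" the abstract refers to) rather than by the teacher's norm, which is why the comparison above is direction-only.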