Stable dynamic backpropagation learning in recurrent neural networks

Authors
Jin, L.; Gupta, M.M.
Citation
L. Jin and M.M. Gupta, Stable dynamic backpropagation learning in recurrent neural networks, IEEE Transactions on Neural Networks, 10(6), 1999, pp. 1321-1334
Citations number
42
Subject categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
10
Issue
6
Year of publication
1999
Pages
1321-1334
Database
ISI
SICI code
1045-9227(199911)10:6<1321:SDBLIR>2.0.ZU;2-E
Abstract
The conventional dynamic backpropagation (DBP) algorithm proposed by Pineda does not necessarily imply the stability of the dynamic neural model in the sense of Lyapunov during a dynamic weight learning process. A difficulty with the DBP learning process is thus associated with the stability of the equilibrium points, which have to be checked by simulating the set of dynamic equations, or else by verifying the stability conditions, after the learning has been completed. To avoid unstable phenomena during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. Based on the explicit stability conditions, in the multiplier method these conditions are introduced into the iterative error index, and the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived using the stability conditions. With these stable DBP algorithms, any analog target pattern may be implemented by a steady output vector which is a nonlinear vector function of the stable equilibrium point. The applicability of the approaches presented is illustrated through both analog and binary pattern storage examples.
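
Illustrative sketch (not from the paper): the Python fragment below gives a rough rendering of the constrained learning rate idea described in the abstract. It assumes a Pineda-style additive network x_dot = -x + W sigma(x) + I with sigma = tanh, uses the standard sufficient stability condition ||W||_2 < 1 (spectral norm times the unit slope bound of tanh) in place of the paper's explicit stability conditions, and employs a crude one-step gradient surrogate; all names and constants are assumptions for illustration only.

    import numpy as np

    # Sketch of a constrained-learning-rate update for a Pineda-style
    # additive recurrent network  x_dot = -x + W @ sigma(x) + I.
    # The specific stability bound and the gradient surrogate are
    # illustrative assumptions, not the paper's exact formulation.

    def sigma(x):
        return np.tanh(x)          # slope bounded by L = 1

    def settle(W, I, steps=500, dt=0.05):
        """Relax the network toward an equilibrium by Euler integration."""
        x = np.zeros(len(I))
        for _ in range(steps):
            x += dt * (-x + W @ sigma(x) + I)
        return x

    def stable_dbp_step(W, I, target, eta_max=0.1, margin=0.99):
        """One weight update whose learning rate is capped so the
        sufficient stability condition ||W||_2 * L < 1 keeps holding."""
        x = settle(W, I)
        y = sigma(x)
        err = y - target
        # Crude one-step gradient surrogate for the equilibrium error.
        grad = np.outer(err * (1.0 - y**2), y)
        # Shrink the learning rate until the post-update weight matrix
        # still satisfies the spectral-norm stability bound.
        eta = eta_max
        while np.linalg.norm(W - eta * grad, 2) >= margin and eta > 1e-8:
            eta *= 0.5
        return W - eta * grad

    # Tiny usage example: store one analog pattern in a 4-unit network.
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((4, 4))
    I = 0.2 * np.ones(4)
    target = np.array([0.3, -0.2, 0.1, 0.4])
    for _ in range(200):
        W = stable_dbp_step(W, I, target)
    print("steady output:", sigma(settle(W, I)))

Halving the learning rate until the updated weights satisfy the norm bound mirrors, in spirit, the paper's idea of recomputing the learning rate at each iteration from the stability conditions, so the network never leaves the stable regime during training.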