Initial state training procedure improves dynamic recurrent networks with time-dependent weights

Citation
L. Leistritz et al., Initial state training procedure improves dynamic recurrent networks with time-dependent weights, IEEE NEURAL, 12(6), 2001, pp. 1513-1518
Number of citations
23
Subject categories
AI, Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
12
Issue
6
Year of publication
2001
Pages
1513-1518
Database
ISI
SICI code
1045-9227(200111)12:6<1513:ISTPID>2.0.ZU;2-P
Abstract
The problem of learning multiple continuous trajectories by means of recurrent neural networks with (in general) time-varying weights is addressed in this study. The learning process is transformed into an optimal control framework where both the weights and the initial network state to be found are treated as controls. For such a task, a new learning algorithm is proposed which is based on a variational formulation of Pontryagin's maximum principle. The convergence of this algorithm, under reasonable assumptions, is also investigated. Numerical examples of learning nontrivial two-class problems are presented which demonstrate the efficiency of the proposed approach.
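
The abstract's central idea, treating both the time-dependent weights and the initial network state as controls in an optimal control problem, can be illustrated with a small numerical sketch. The code below is not the authors' algorithm: it discretizes a simple continuous-time recurrent network and uses automatic differentiation (JAX) in place of the hand-derived adjoint equations that a variational formulation of Pontryagin's maximum principle would supply. The dynamics x_{k+1} = x_k + dt * tanh(W_k x_k), the dimensions, and the target trajectory are all illustrative assumptions.

# A minimal sketch (not the paper's method): a discretized recurrent network
# with one weight matrix per time step, where the weight sequence W and the
# initial state x0 are both treated as trainable "controls".
import jax
import jax.numpy as jnp

T, n, dt = 20, 3, 0.1            # time steps, state dimension, step size (assumed)

def rollout(x0, weights):
    """Integrate x_{k+1} = x_k + dt * tanh(W_k x_k); return the trajectory."""
    def step(x, W):
        x_next = x + dt * jnp.tanh(W @ x)
        return x_next, x_next
    _, traj = jax.lax.scan(step, x0, weights)
    return traj                   # shape (T, n)

def loss(params, target):
    """Squared error between the produced and the target trajectory."""
    traj = rollout(params["x0"], params["W"])
    return jnp.mean((traj - target) ** 2)

key = jax.random.PRNGKey(0)
target = jnp.sin(jnp.linspace(0.0, 2.0, T))[:, None] * jnp.ones((T, n))
params = {
    "x0": jnp.zeros(n),                              # initial state as a control
    "W": 0.1 * jax.random.normal(key, (T, n, n)),    # time-dependent weights
}

grad_fn = jax.jit(jax.grad(loss))                    # AD replaces the adjoint system
for it in range(300):                                # plain gradient descent
    g = grad_fn(params, target)
    params = jax.tree_util.tree_map(lambda p, gp: p - 0.2 * gp, params, g)

print("final loss:", float(loss(params, target)))

Optimizing x0 alongside the per-step weight matrices, rather than fixing it, is what distinguishes the "initial state training" idea highlighted in the title from training the weights alone.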