Dynamical approximation by recurrent neural networks

Citation
M. Garzon and F. Botelho, Dynamical approximation by recurrent neural networks, NEUROCOMPUT, 29(1-3), 1999, pp. 25-46
Citation count
23
Subject categories
AI Robotics and Automatic Control
Journal title
NEUROCOMPUTING
ISSN journal
0925-2312
Volume
29
Issue
1-3
Year of publication
1999
Pages
25 - 46
Database
ISI
SICI code
0925-2312(199911)29:1-3<25:DABRNN>2.0.ZU;2-S
Abstract
We examine the approximating power of recurrent networks for dynamical systems through an unbounded number of iterations. It is shown that the natural family of recurrent neural networks with saturated linear transfer functions and synaptic weight matrices of rank 1 is essentially equivalent to feedforward neural networks with recurrent layers. Therefore, they inherit the universal approximation property of real-valued functions in one variable in a stronger sense, namely through an unbounded number of iterations and with approximation guaranteed to be within O(1/n), with n neurons and possibly lateral synapses allowed in the hidden layer. However, they are not as complex in their dynamical behavior as systems defined by Turing machines. It is further proved that every continuous dynamical system can be approximated through all iterations, by both finite analog and boolean networks, when one requires approximation of given arbitrary exact orbits of the (perhaps unknown) map. This result no longer holds when the orbits of the given map are only available as contaminated orbits of the approximant net due to the presence of random noise (e.g., due to digital truncations of analog activations). Neural nets can nonetheless approximate large families of continuous maps, including chaotic maps and maps sensitive to initial conditions. A precise characterization of which maps can be approximated fault-tolerantly by analog and discrete neural networks for unboundedly many iterations remains an open problem. (C) 1999 Published by Elsevier Science B.V. All rights reserved.
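The rank-1 recurrent architecture described in the abstract can be pictured concretely. The following is a minimal Python sketch, not taken from the paper: it iterates a network whose synaptic weight matrix is the rank-1 outer product u v^T under a saturated-linear activation, alongside the logistic map as an example of a chaotic target dynamical system. All names here (sat, rank1_recurrent_step, logistic, u, v, b) are illustrative assumptions, and the weights are untrained, so the code only demonstrates the mechanics of unbounded iteration, not an actual approximation result.

```python
import numpy as np

def sat(x):
    # Saturated-linear transfer function: identity on [0, 1], clipped outside.
    return np.clip(x, 0.0, 1.0)

def rank1_recurrent_step(state, u, v, b):
    # One iteration of a recurrent net whose weight matrix is the rank-1
    # outer product u v^T (illustrative form, not the paper's construction).
    W = np.outer(u, v)            # rank-1 synaptic weight matrix
    return sat(W @ state + b)     # saturated-linear activation

def logistic(x, r=3.9):
    # Example target dynamical system: the logistic map, chaotic for r = 3.9.
    return r * x * (1.0 - x)

# Iterate the map and an untrained rank-1 net from the same initial condition.
# A trained net would be fitted so a readout of the state tracks the orbit.
rng = np.random.default_rng(0)
n = 8                             # number of hidden neurons
u, v = rng.normal(size=n), rng.normal(size=n)
b = 0.1 * rng.normal(size=n)

x = 0.3                           # initial condition of the map
state = np.full(n, 0.3)           # initial network state
for t in range(5):
    x = logistic(x)
    state = rank1_recurrent_step(state, u, v, b)
    print(t, x, state.mean())     # readout here is just the mean activation
```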