We examine the approximating power of recurrent networks for dynamical systems through an unbounded number of iterations. It is shown that the natural family of recurrent neural networks with saturated linear transfer functions and synaptic weight matrices of rank 1 is essentially equivalent to feedforward neural networks with recurrent layers. Therefore, they inherit the universal approximation property of real-valued functions in one variable in a stronger sense, namely through an unbounded number of iterations and with approximation guaranteed to be within O(1/n), with n neurons and possibly lateral synapses allowed in the hidden layer. However, they are not as complex in their dynamical behavior as systems defined by Turing machines.
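As a minimal illustrative formalization (the notation below is ours, not taken verbatim from the paper), a network of this family updates a state x_t by

\[ x_{t+1} = \sigma(W x_t + b), \qquad \sigma(z) = \min(1, \max(0, z)) \ \text{componentwise}, \qquad W = v w^{\top}, \]

where \sigma is the saturated linear transfer function and the rank-1 constraint W = v w^{\top} gives W x_t = (w^{\top} x_t)\, v, so the recurrent dynamics are driven by the single scalar w^{\top} x_t; this is consistent with the abstract's appeal to approximation of real-valued functions of one variable.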
It is further proved that every continuous dynamical system can be approximated through all iterations by both finite analog and Boolean networks, when one requires approximation of given arbitrary exact orbits of the (perhaps unknown) map.
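One natural reading of this statement, in our notation rather than the paper's: for a continuous map f on a compact domain, any given initial point x_0, and any \varepsilon > 0, there is a finite network N such that

\[ \| N^{t}(x_0) - f^{t}(x_0) \| < \varepsilon \quad \text{for all } t \ge 0, \]

where f^t and N^t denote t-fold iteration; the approximation is thus uniform over an unbounded number of iterations rather than holding only for a single application of the map.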
This result no longer holds when the orbits of the given map are only available as contaminated orbits of the approximant net, due to the presence of random noise (e.g., digital truncation of analog activations). Neural nets can nonetheless approximate large families of continuous maps, including chaotic maps and maps sensitive to initial conditions. A precise characterization of which maps can be approximated fault-tolerantly by analog and discrete neural networks for unboundedly many iterations remains an open problem. © 1999 Published by Elsevier Science B.V. All rights reserved.