The following learning problem is considered for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched. (C) 1998 Elsevier Science B.V. All rights reserved.
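The data-collection setup described above can be sketched numerically. The snippet below is an illustrative construction only, not the paper's formal model: it assumes a hypothetical continuous-time RNN of the common form dx/dt = -x + tanh(Wx + bu) standing in for the "black box", drives it with randomly generated constant inputs, and records the resulting outputs as training pairs.

```python
import numpy as np

def simulate_ctrnn(W, b, c, u, x0, T=1.0, dt=1e-3):
    """Euler-integrate an illustrative continuous-time sigmoidal RNN,
    dx/dt = -x + tanh(W x + b u), and return the output y = c . x(T).
    (The paper's exact network class and observation scheme may differ.)"""
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.tanh(W @ x + b * u))
    return float(c @ x)

rng = np.random.default_rng(0)
n = 4  # state dimension of the hypothetical unknown system

# Hypothetical "black box": an unknown teacher network with random weights.
W_true = rng.normal(size=(n, n)) / np.sqrt(n)
b_true = rng.normal(size=n)
c_true = rng.normal(size=n)

# Collect output measurements for a set of randomly generated constant inputs;
# a learner would then fit its own network to these observed pairs.
inputs = rng.uniform(-1.0, 1.0, size=8)
samples = [(u, simulate_ctrnn(W_true, b_true, c_true, u, np.zeros(n)))
           for u in inputs]
print(len(samples))  # number of input/output training pairs collected
```

The sample-complexity result in the abstract concerns how many such input/output measurements suffice for the fitted network to generalize reliably.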