Feedforward neural networks with continuous-valued activation functions have recently emerged as a powerful paradigm for modeling nonlinear systems. Several classes of such networks have been proved to possess universal approximation capabilities. Prominent among the advantages claimed for such networks are robustness and distributedness of processing and representation. However, there has been little direct research on either issue, particularly the former, and these characteristics of neural networks have been accepted mostly on faith or on the basis of heuristic arguments. In this paper, we attempt to construct a framework within which these important issues can be addressed in a coherent and tractable manner. The focus of the paper is on a particularly simple, but instructive, problem: to predict the effect of perturbations in internal neuron outputs on the performance of the network as a whole. This is directly useful in three ways: 1) it gives information about the network's tolerance of internal perturbations; 2) it can be used as a criterion for selecting among multiple network solutions to a given modeling problem; and 3) it provides a framework for relating the performance of a network to the performance of its components. Of these, the third is especially attractive because it can be used as the basis for a theory of distributed representation and processing in feedforward networks.
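To make the central problem concrete, the following is a minimal sketch, not the paper's formalism, of the kind of experiment described above: perturbing the output of a single hidden neuron in a small feedforward network and observing the resulting change in overall error. The architecture, data, perturbation scheme, and all names here are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's method): perturb one hidden
# unit's output in a tiny feedforward network and measure the change in error.
import numpy as np

rng = np.random.default_rng(0)

# A small 2-4-1 network with tanh hidden units and fixed random weights.
W1 = rng.normal(size=(4, 2))   # hidden-layer weights
b1 = rng.normal(size=4)        # hidden-layer biases
W2 = rng.normal(size=(1, 4))   # output-layer weights
b2 = rng.normal(size=1)        # output-layer bias

X = rng.uniform(-1, 1, size=(200, 2))      # sample inputs
y = np.sin(X[:, 0]) * np.cos(X[:, 1])      # arbitrary target function

def forward(X, perturb_unit=None, delta=0.0):
    """Forward pass; optionally add `delta` to one internal neuron's output."""
    h = np.tanh(X @ W1.T + b1)             # internal (hidden) neuron outputs
    if perturb_unit is not None:
        h = h.copy()
        h[:, perturb_unit] += delta        # perturbation of that neuron's output
    return (h @ W2.T + b2).ravel()

def mse(pred):
    return np.mean((pred - y) ** 2)

baseline = mse(forward(X))
for unit in range(4):
    for delta in (0.1, 0.5):
        degraded = mse(forward(X, perturb_unit=unit, delta=delta))
        print(f"unit {unit}, delta {delta:+.1f}: "
              f"error {baseline:.4f} -> {degraded:.4f}")
```

Comparing how sharply the error grows for each unit gives a rough, empirical view of the network's tolerance of internal perturbations and of how much each component contributes to overall performance.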