An important issue in the design and implementation of a neural network is the sensitivity of its output to input and weight perturbations. In this paper, we discuss the sensitivity of the most popular and general feedforward neural network, the multilayer perceptron (MLP). The sensitivity is defined as the mathematical expectation of the output errors of the MLP due to input and weight perturbations, taken with respect to all input and weight values in a given continuous interval.
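One plausible formalization of this definition (the notation here is ours, not taken from the paper) is

$$
S(\Delta x, \Delta w) \;=\; \mathbb{E}_{x,\,w}\!\left[\,\big|\, f(x + \Delta x,\; w + \Delta w) - f(x, w) \,\big|\,\right],
$$

where $f(x, w)$ denotes the MLP output for input $x$ and weights $w$, and the expectation is taken over all input and weight values in the given continuous interval.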
The sensitivity of a single neuron is discussed first, and an analytical expression, a function of the absolute values of the input and weight perturbations, is derived approximately. An algorithm is then given to compute the sensitivity of the entire MLP. As intuitively expected, the sensitivity increases with the input and weight perturbations, but the increase has an upper bound determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal number of neurons per layer, which yields the highest sensitivity value. The effect of the number of layers is quite unexpected: the sensitivity may decrease at first and then remain almost constant as the number of layers increases.
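As a rough illustration of the quantity being studied, the expectation above can be estimated numerically by Monte Carlo sampling. The sketch below is not the paper's analytical derivation or algorithm: the sigmoid activation, the interval $[-1, 1]$ for inputs and weights, the fixed-magnitude random-sign perturbations, and all function names are assumptions made for illustration only.

```python
import numpy as np

def mlp_forward(x, weights):
    """Forward pass through an MLP with logistic (sigmoid) activations."""
    a = x
    for W in weights:
        a = 1.0 / (1.0 + np.exp(-(W @ a)))
    return a

def sensitivity(layer_sizes, delta_x, delta_w, n_samples=10_000, seed=0):
    """Monte Carlo estimate of sensitivity: the expected absolute output
    deviation under input perturbations of magnitude delta_x and weight
    perturbations of magnitude delta_w, averaged over inputs and weights
    drawn uniformly from [-1, 1] (an assumed 'continuous interval')."""
    rng = np.random.default_rng(seed)
    shapes = list(zip(layer_sizes[1:], layer_sizes[:-1]))
    total = 0.0
    for _ in range(n_samples):
        # sample an input vector and a full weight configuration
        x = rng.uniform(-1.0, 1.0, layer_sizes[0])
        ws = [rng.uniform(-1.0, 1.0, s) for s in shapes]
        # perturb input and weights by fixed magnitudes with random signs
        dx = delta_x * rng.choice([-1.0, 1.0], x.shape)
        dws = [delta_w * rng.choice([-1.0, 1.0], s) for s in shapes]
        y = mlp_forward(x, ws)
        y_pert = mlp_forward(x + dx, [W + dW for W, dW in zip(ws, dws)])
        total += float(np.abs(y_pert - y).mean())
    return total / n_samples

# Example: sensitivity of a 4-8-1 MLP to 1% input and weight perturbations
print(sensitivity([4, 8, 1], delta_x=0.01, delta_w=0.01))
```

Varying `delta_x`, `delta_w`, and `layer_sizes` in such a sketch is one way to observe, empirically, the trends the paper reports: growth of sensitivity with perturbation magnitude up to a bound, and its dependence on the number of neurons per layer and the number of layers.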