The performance of feedforward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard backpropagation algorithm cannot be applied. In this letter, we derive a computationally efficient learning algorithm, for a feedforward network of arbitrary topology, which can be used to minimize such error functions. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.