This paper describes the application of regularisation to the training of feedforward neural networks as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training, and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency of the hybrid training algorithm, and of many linear optimisation strategies, to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and explained analytically. While such weight solutions do not generally result in poor fits, it is argued that they may be subject to numerical instability and are therefore undesirable. Using an illustrative example, it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means of controlling the magnitude of solutions. (C) 2001 Elsevier Science B.V. All rights reserved.
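As a minimal sketch of the magnitude-control effect described above (not the authors' hybrid algorithm), the following Python example applies Tikhonov (ridge) regularisation to an ill-conditioned linear least-squares problem of the kind that arises when solving for the output-layer weights of an RBF network. The design matrix Phi, the regularisation parameter lam, and the synthetic data are all illustrative assumptions.

import numpy as np

# Sketch, assuming a linear-in-the-weights model y = Phi @ w, as in the
# output layer of an RBF network. Phi is made ill-conditioned on purpose
# by giving it nearly collinear columns.
rng = np.random.default_rng(0)
n, p = 50, 10
base = rng.standard_normal((n, 1))
Phi = base + 1e-4 * rng.standard_normal((n, p))  # columns almost identical
y = rng.standard_normal(n)

# Unregularised least squares: tiny singular values of Phi are inverted,
# which inflates the weight norm.
w_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Tikhonov/ridge regularisation: w = (Phi^T Phi + lam I)^{-1} Phi^T y.
# lam = 1e-2 is an illustrative choice, not a recommended value.
lam = 1e-2
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

print(f"condition number of Phi:   {np.linalg.cond(Phi):.2e}")
print(f"||w|| unregularised:       {np.linalg.norm(w_ls):.2e}")
print(f"||w|| ridge (lam={lam}):   {np.linalg.norm(w_ridge):.2e}")

# Both solutions fit the training data comparably well, mirroring the
# abstract's point that large-magnitude solutions need not fit poorly.
print(f"residual unregularised:    {np.linalg.norm(Phi @ w_ls - y):.3f}")
print(f"residual ridge:            {np.linalg.norm(Phi @ w_ridge - y):.3f}")

Running this sketch shows the unregularised weight norm growing with the conditioning of Phi while the regularised norm stays modest, at a near-identical training residual, which is the trade-off the paper's illustrative example examines.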