Improving neural network training solutions using regularisation

Citation
S. Mc Loone and G. Irwin, Improving neural network training solutions using regularisation, NEUROCOMPUT, 37, 2001, pp. 71-90
Number of citations
33
Subject Categories
AI Robotics and Automatic Control
Journal title
NEUROCOMPUTING
Journal ISSN
0925-2312
Volume
37
Year of publication
2001
Pages
71 - 90
Database
ISI
SICI code
0925-2312(200104)37:<71:INNTSU>2.0.ZU;2-5
Abstract
This paper describes the application of regularisation to the training of feedforward neural networks, as a means of improving the quality of solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency for the hybrid training algorithm and many linear optimisation strategies to generate large magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means for controlling the magnitude of solutions. (C) 2001 Elsevier Science B.V. All rights reserved.
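The abstract's central observation, that ill-conditioned linear training problems yield large-magnitude weights which L2 (Tikhonov) regularisation tames at little cost in fit, can be illustrated with a minimal sketch. This example is not from the paper: the design matrix, data, and regularisation parameter below are illustrative assumptions, standing in for an ill-conditioned linear step such as solving for RBF output-layer weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned design matrix: two nearly collinear basis columns,
# mimicking an ill-conditioned linear training problem (e.g. RBF
# output-weight estimation). Values here are illustrative assumptions.
n = 50
x = rng.uniform(-1.0, 1.0, n)
Phi = np.column_stack([x, x + 1e-6 * rng.standard_normal(n), np.ones(n)])
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(n)

def ridge_weights(Phi, y, lam):
    """Solve (Phi^T Phi + lam*I) w = Phi^T y  (Tikhonov / L2 regularisation)."""
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

w_unreg = ridge_weights(Phi, y, 0.0)   # plain least squares
w_reg = ridge_weights(Phi, y, 1e-3)    # regularised solution

# The regularised solution has a far smaller weight norm, while the
# residual sum of squares grows only slightly.
for lam, w in [(0.0, w_unreg), (1e-3, w_reg)]:
    rss = np.sum((Phi @ w - y) ** 2)
    print(f"lambda={lam:g}  ||w|| = {np.linalg.norm(w):.3g}  RSS = {rss:.3g}")
```

Running the sketch shows the unregularised weights blowing up along the near-null direction of the Gram matrix, matching the paper's argument that such solutions, though not poor fits, invite numerical instability.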