TOWARD GLOBAL OPTIMIZATION OF NEURAL NETWORKS - A COMPARISON OF THE GENETIC ALGORITHM AND BACKPROPAGATION

Citation
R.S. Sexton et al., TOWARD GLOBAL OPTIMIZATION OF NEURAL NETWORKS - A COMPARISON OF THE GENETIC ALGORITHM AND BACKPROPAGATION, Decision Support Systems, 22(2), 1998, pp. 171-185
Citations number
27
Subject categories
Computer Science, Artificial Intelligence; Computer Science, Information Systems; Operations Research & Management Science
Journal title
Decision Support Systems
ISSN journal
0167-9236
Volume
22
Issue
2
Year of publication
1998
Pages
171 - 185
Database
ISI
SICI code
0167-9236(1998)22:2<171:TGOONN>2.0.ZU;2-2
Abstract
The recent surge of neural network research in business is not surprising, since the underlying functions governing business data are generally unknown and the neural network offers a tool that can approximate the unknown function to any desired degree of accuracy. The vast majority of these studies rely on a gradient algorithm, typically a variation of backpropagation, to obtain the parameters (weights) of the model. The well-known limitations of gradient search techniques applied to complex nonlinear optimization problems such as artificial neural networks have often resulted in inconsistent and unpredictable performance. Many researchers have attempted to address the problems associated with the training algorithm by imposing constraints on the search space or by restructuring the architecture of the neural network. In this paper we demonstrate that such constraints and restructuring are unnecessary if a sufficiently complex initial architecture and an appropriate global search algorithm are used. We further show that the genetic algorithm can not only serve as a global search algorithm but, by appropriately defining the objective function, can simultaneously achieve a parsimonious architecture. The value of using the genetic algorithm over backpropagation for neural network optimization is illustrated through a Monte Carlo study which compares each algorithm on in-sample, interpolation, and extrapolation data for seven test functions. (C) 1998 Elsevier Science B.V.
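To make the abstract's idea concrete, the following is a minimal Python sketch of a genetic algorithm searching the weight space of a small feedforward network, with a parsimony term added to the objective function. It is not the paper's method: the target function, network size, penalty weight, and GA operators (truncation selection, uniform crossover, Gaussian mutation) are all illustrative assumptions, and the paper's seven test functions are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target function standing in for one of the paper's test functions.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])

H = 6                          # hidden units (assumed "sufficiently complex" start)
n_w = 2 * H + H + H + 1        # input weights + hidden biases + output weights + output bias

def unpack(w):
    i = 0
    W1 = w[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    b2 = w[i]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)   # single hidden layer, tanh activation (assumed)
    return h @ W2 + b2

def objective(w, lam=1e-3):
    # In-sample error plus a parsimony penalty: counting weights above a
    # small threshold rewards candidates that drive unneeded weights toward
    # zero, a crude proxy for a smaller effective architecture.
    err = np.mean((predict(w, X) - y) ** 2)
    return err + lam * np.count_nonzero(np.abs(w) > 1e-2)

POP, GENS = 60, 300
pop = rng.normal(0.0, 1.0, size=(POP, n_w))

for gen in range(GENS):
    fit = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]      # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(n_w) < 0.5               # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 0.1, n_w) * (rng.random(n_w) < 0.1)  # sparse mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = min(pop, key=objective)
print("best objective:", objective(best))

Because the GA only ever evaluates the objective and never differentiates it, the same loop works for the penalized (non-smooth) objective above, which is precisely why the global-search approach can pursue accuracy and parsimony simultaneously where a gradient method like backpropagation cannot.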