R.S. Sexton et al., TOWARD GLOBAL OPTIMIZATION OF NEURAL NETWORKS - A COMPARISON OF THE GENETIC ALGORITHM AND BACKPROPAGATION, Decision Support Systems, 22(2), 1998, pp. 171-185
Citations number
27
Subject Categories
Computer Science Artificial Intelligence","Computer Science Information Systems","Operations Research & Management Science
The recent surge in activity of neural network research in business is not surprising, since the underlying functions controlling business data are generally unknown and the neural network offers a tool that can approximate the unknown function to any desired degree of accuracy. The vast majority of these studies rely on a gradient algorithm, typically a variation of backpropagation, to obtain the parameters (weights) of the model. The well-known limitations of gradient search techniques applied to complex nonlinear optimization problems such as artificial neural networks have often resulted in inconsistent and unpredictable performance. Many researchers have attempted to address the problems associated with the training algorithm by imposing constraints on the search space or by restructuring the architecture of the neural network. In this paper we demonstrate that such constraints and restructuring are unnecessary if a sufficiently complex initial architecture and an appropriate global search algorithm are used. We further show that the genetic algorithm can not only serve as a global search algorithm but, by appropriately defining the objective function, can simultaneously achieve a parsimonious architecture. The value of using the genetic algorithm over backpropagation for neural network optimization is illustrated through a Monte Carlo study that compares the two algorithms on in-sample, interpolation, and extrapolation data for seven test functions. (C) 1998 Elsevier Science B.V.