REGRESSION MODELING IN BACKPROPAGATION AND PROJECTION PURSUIT LEARNING

Citation
J.N. Hwang et al., REGRESSION MODELING IN BACKPROPAGATION AND PROJECTION PURSUIT LEARNING, IEEE Transactions on Neural Networks, 5(3), 1994, pp. 342-353
Citations number
34
Subject Categories
"Computer Application, Chemistry & Engineering", "Engineering, Electrical & Electronic", "Computer Science, Artificial Intelligence", "Computer Science, Hardware & Architecture", "Computer Science, Theory & Methods"
ISSN journal
10459227
Volume
5
Issue
3
Year of publication
1994
Pages
342 - 353
Database
ISI
SICI code
1045-9227(1994)5:3<342:RMIBAP>2.0.ZU;2-8
Abstract
We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular backpropagation learning (BPL), well known in the artificial neural networks literature; the other is projection pursuit learning (PPL), which has emerged in recent years in the statistical estimation literature. Both BPL and PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the fixed nonlinear activations (usually sigmoidal) used for the hidden neurons in BPL, PPL systematically approximates the unknown nonlinear activations. Moreover, BPL estimates all the weights simultaneously at each iteration, while PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although BPL and PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, PPL proves more parsimonious in that it requires fewer hidden neurons to approximate the true function. To further improve the statistical performance of PPL, an orthogonal polynomial approximation is used in place of the supersmoother method originally proposed for nonlinear activation approximation in PPL.
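The contrast the abstract draws can be illustrated with a minimal sketch (not the authors' code; all function names and parameters here are illustrative assumptions). One PPL-style stage projects the data onto a direction and then *fits* the unknown activation, here with an orthogonal (Legendre) polynomial as in the abstract's proposed replacement for the supersmoother, whereas a BPL-style hidden unit applies a fixed sigmoid to the same projection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y depends on x only through one projection
# (a "ridge function"), the setting both BPL and PPL exploit.
X = rng.uniform(-0.5, 0.5, size=(200, 2))
true_w = np.array([0.8, 0.6])            # unit-norm true direction
y = np.sin(3.0 * X @ true_w)             # unknown nonlinear activation

def fit_ppl_stage(X, y, w, degree=7):
    """One PPL-style 'neuron': project onto w, then approximate the
    unknown activation g by a least-squares Legendre polynomial."""
    t = X @ (w / np.linalg.norm(w))      # 1-D projection of the data
    coeffs = np.polynomial.legendre.legfit(t, y, degree)
    g = lambda s: np.polynomial.legendre.legval(s, coeffs)
    return g, t

def bpl_hidden_unit(X, w):
    """A BPL hidden unit applies a *fixed* sigmoid to the projection."""
    t = X @ w
    return 1.0 / (1.0 + np.exp(-t))

# Given the correct direction, the fitted activation tracks sin(3t)
# closely, so the mean squared residual is tiny.
g, t = fit_ppl_stage(X, y, true_w)
residual = np.mean((g(t) - y) ** 2)
print(f"PPL-stage mean squared residual: {residual:.2e}")
```

In full PPL the direction w and the polynomial coefficients would be re-estimated cyclically, neuron by neuron; this sketch shows only the activation-fitting step that distinguishes PPL from a fixed-sigmoid BPL unit.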