J. N. Hwang et al., "Regression modeling in backpropagation and projection pursuit learning," IEEE Transactions on Neural Networks, 5(3), 1994, pp. 342-353
In this paper we studied and compared two types of connectionist learning methods for model-free regression problems: the popular backpropagation learning (BPL), well known in the artificial neural networks literature, and projection pursuit learning (PPL), which has emerged in recent years in the statistical estimation literature. Both BPL and PPL are based on projections of the data in directions determined from the interconnection weights. However, unlike BPL, which uses fixed nonlinear activations (usually sigmoidal) for the hidden neurons, PPL systematically approximates the unknown nonlinear activations. Moreover, BPL estimates all the weights simultaneously at each iteration, while PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although BPL and PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, PPL proves more parsimonious in that it requires fewer hidden neurons to approximate the true function. To further improve the statistical performance of PPL, an orthogonal polynomial approximation is used in place of the supersmoother method originally proposed for nonlinear activation approximation in PPL.
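As an illustration (not taken from the paper), the cyclic estimation idea can be sketched for a single hidden neuron: alternate between fitting the unknown activation g along the current projection and updating the projection direction with a Gauss-Newton step. The sketch below uses NumPy with an ordinary polynomial least-squares fit standing in for the paper's orthogonal-polynomial (or supersmoother) activation estimate; the data, degree, and iteration count are invented for the example.

```python
import numpy as np

P = np.polynomial.polynomial  # ascending-order polynomial helpers
rng = np.random.default_rng(0)

# Synthetic data: y depends on a single unknown projection of x.
n, d = 400, 3
X = rng.normal(size=(n, d))
w_true = np.array([0.8, -0.5, 0.3])
w_true /= np.linalg.norm(w_true)
y = np.sin(1.5 * X @ w_true) + 0.05 * rng.normal(size=n)

# Initialize the projection direction from a linear least-squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
w = beta / np.linalg.norm(beta)
deg = 5  # polynomial degree standing in for the activation estimate

for _ in range(30):
    z = X @ w
    # (1) Activation step: estimate g by a polynomial fit of y on z.
    coeffs = P.polyfit(z, y, deg)
    g = P.polyval(z, coeffs)
    gprime = P.polyval(z, P.polyder(coeffs))
    # (2) Projection step: Gauss-Newton update of w with g held fixed,
    # linearizing g(X(w + dw)) ~ g(Xw) + g'(Xw) * (X dw).
    r = y - g                    # current residual
    J = gprime[:, None] * X      # Jacobian of g(Xw) w.r.t. w
    dw, *_ = np.linalg.lstsq(J, r, rcond=None)
    w += dw
    w /= np.linalg.norm(w)       # keep the direction unit-length

# Refit the activation along the final direction and score the fit.
z = X @ w
g = P.polyval(z, P.polyfit(z, y, deg))
mse = float(np.mean((y - g) ** 2))
```

Because g is re-estimated at every pass, the sign and scale of w are unidentifiable on their own; normalizing w each iteration and letting the polynomial absorb scale is one conventional way to fix the parameterization.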