LEARNING AND GENERALIZATION IN RADIAL BASIS FUNCTION NETWORKS

Citation
Jas. Freeman and D. Saad, LEARNING AND GENERALIZATION IN RADIAL BASIS FUNCTION NETWORKS, Neural Computation, 7(5), 1995, pp. 1000-1020
Citations number
19
Subject Categories
Computer Sciences, Computer Science Artificial Intelligence, Neurosciences
Journal title
Neural Computation
ISSN journal
08997667
Volume
7
Issue
5
Year of publication
1995
Pages
1000 - 1020
Database
ISI
SICI code
0899-7667(1995)7:5<1000:LAGIRB>2.0.ZU;2-A
Abstract
The two-layer radial basis function network, with fixed centers of the basis functions, is analyzed within a stochastic training paradigm. Various definitions of generalization error are considered, and two such definitions are employed in deriving generic learning curves and generalization properties, both with and without a weight decay term. The generalization error is shown analytically to be related to the evidence and, via the evidence, to the prediction error and free energy. The generalization behavior is explored; the generic learning curve is found to be inversely proportional to the number of training pairs presented. Optimization of training is considered by minimizing the generalization error with respect to the free parameters of the training algorithms. Finally, the effect of the joint activations between hidden-layer units is examined and shown to speed training.
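The setting the abstract describes (a two-layer RBF network whose basis-function centers are fixed, trained stochastically on the output weights with an optional weight-decay term, and evaluated by a generalization error) can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's own code; the widths, learning rate, decay strength, and the sinusoidal teacher function are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed basis-function centres (not adapted during training),
# matching the "fixed centers" assumption of the analysis.
centres = rng.uniform(-1.0, 1.0, size=(8, 1))  # 8 hidden units, 1-D input
width = 0.5                                    # common basis width (assumed)

def phi(x):
    """Hidden-layer activations: Gaussian radial basis functions."""
    return np.exp(-((x - centres.T) ** 2) / (2 * width ** 2))

def target(x):
    """Hypothetical teacher used to generate training pairs."""
    return np.sin(np.pi * x)

# Stochastic (online) training of the output weights only,
# with a weight-decay term of strength `decay`.
w = np.zeros(centres.shape[0])
eta, decay = 0.1, 1e-3
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)
    h = phi(np.array([[x]]))[0]
    err = w @ h - target(x)
    w -= eta * (err * h + decay * w)   # gradient step plus weight decay

# Empirical generalization error: mean squared error on fresh inputs.
xs = rng.uniform(-1.0, 1.0, size=(200, 1))
gen_err = np.mean((phi(xs) @ w - target(xs[:, 0])) ** 2)
```

Only `w` is learned here; the centres and widths stay fixed, which is what makes the learning problem analytically tractable in the paper's framework.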