AVERAGE-CASE LEARNING-CURVES FOR RADIAL BASIS FUNCTION NETWORKS

Citation
S.B. Holden and M. Niranjan, AVERAGE-CASE LEARNING-CURVES FOR RADIAL BASIS FUNCTION NETWORKS, Neural Computation, 9(2), 1997, pp. 441-460
Citations number
24
Subject Categories
"Computer Sciences", "Computer Science, Artificial Intelligence", Neurosciences
Journal title
Neural Computation
ISSN journal
08997667
Volume
9
Issue
2
Year of publication
1997
Pages
441 - 460
Database
ISI
SICI code
0899-7667(1997)9:2<441:ALFRBF>2.0.ZU;2-5
Abstract
The application of statistical physics to the study of the learning curves of feedforward connectionist networks has to date been concerned mostly with perceptron-like networks. Recent work has extended the theory to networks such as committee machines and parity machines, and an important direction for current and future research is the extension of this body of theory to further connectionist networks. In this article, we use this formalism to investigate the learning curves of Gaussian radial basis function networks (RBFNs) having fixed basis functions. (These networks have also been called generalized linear regression models.) We address the problem of learning linear and nonlinear, realizable and unrealizable, target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general Gaussian RBFNs, for which all parameters, including centers and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions (basis functions having parameters chosen without reference to the training examples), we then study the learning curves for these networks in the limit of high temperature.
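The abstract's observation that an RBFN with fixed basis functions is a generalized linear regression model can be illustrated with a minimal sketch. The code below is not from the paper: the target rule, center placement, spread value, and input distribution are all illustrative assumptions. It shows that once the Gaussian centers and spreads are chosen without reference to the training examples, learning reduces to linear estimation of the output weights, and the generalization error (expected error for the trained parameters) can be estimated on fresh inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed basis functions: centers on a grid and a common spread, chosen
# without reference to the training examples (illustrative assumptions).
centers = np.linspace(-1.0, 1.0, 10)
sigma = 0.3

def design_matrix(x):
    """Evaluate the Gaussian basis functions at inputs x (shape (n,))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))

# Noise-free training examples from a (hypothetical) realizable nonlinear
# target rule; the paper's setting also covers unrealizable targets.
x_train = rng.uniform(-1.0, 1.0, 50)
y_train = np.sin(np.pi * x_train)

# With the basis fixed, training the output weights is linear least squares
# (here in place of the paper's stochastic training algorithm).
Phi = design_matrix(x_train)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Generalization error: expected squared error over the input distribution,
# estimated by Monte Carlo on fresh test inputs.
x_test = rng.uniform(-1.0, 1.0, 10_000)
gen_error = np.mean((design_matrix(x_test) @ w - np.sin(np.pi * x_test)) ** 2)
print(f"estimated generalization error: {gen_error:.2e}")
```

Note the design choice: a direct least-squares solve stands in for the stochastic training algorithm analyzed in the paper, since both estimate the same output weights when the basis is fixed.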