The application of statistical physics to the study of the learning curves of feedforward connectionist networks has to date been concerned mostly with perceptron-like networks. Recent work has extended the theory to networks such as committee machines and parity machines, and an important direction for current and future research is the extension of this body of theory to further connectionist networks. In this article, we use this formalism to investigate the learning curves of gaussian radial basis function networks (RBFNs) having fixed basis functions. (These networks have also been called generalized linear regression models.)
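To make the fixed-basis-function setting concrete, the following is a minimal sketch in Python with NumPy; the function names, the number of basis functions, and the standard-normal centers are illustrative assumptions rather than the paper's construction. It shows why such a network is a generalized linear regression model: the output is linear in the weights, with the gaussian basis functions held fixed.

    import numpy as np

    def rbfn(x, centers, spreads, weights):
        # Gaussian basis functions with fixed centers c_k and spread
        # parameters s_k; the output is a weighted sum of the basis
        # function activations and is linear in the weights w_k.
        phi = np.exp(-np.sum((x - centers) ** 2, axis=1)
                     / (2.0 * spreads ** 2))
        return weights @ phi

    # Example: 5 fixed basis functions on 2-dimensional inputs, with
    # centers and spreads chosen without reference to any training data.
    rng = np.random.default_rng(0)
    centers = rng.standard_normal((5, 2))
    spreads = np.ones(5)
    weights = rng.standard_normal(5)
    y = rbfn(rng.standard_normal(2), centers, spreads, weights)

Because only the weights are trained, fitting such a network amounts to linear regression on the basis-function activations.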
We address the problem of learning linear and nonlinear, realizable and unrealizable target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general gaussian RBFNs in which all parameters, including centers and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions (basis functions whose parameters are chosen without reference to the training examples), we then study the learning curves for these networks in the high-temperature limit.
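As an illustration of the generalization error as defined above (the expected error of a network with a given set of parameters, averaged over inputs), one can estimate it by Monte Carlo; the sketch below reuses the rbfn function and NumPy import from the earlier sketch, and the quadratic error measure and standard-normal input distribution are assumptions made for illustration, not necessarily the paper's choices.

    def generalization_error(weights, centers, spreads, target,
                             n_samples=10_000, rng=None):
        # Monte Carlo estimate of the expected squared error of the
        # network against the target rule, with inputs drawn from an
        # assumed standard-normal input distribution.
        rng = np.random.default_rng() if rng is None else rng
        xs = rng.standard_normal((n_samples, centers.shape[1]))
        preds = np.array([rbfn(x, centers, spreads, weights) for x in xs])
        ys = np.array([target(x) for x in xs])
        return np.mean((preds - ys) ** 2)

In the statistical-physics treatment, the learning curve is this quantity averaged over networks drawn from the Gibbs distribution induced by the stochastic training algorithm at temperature T; the high-temperature limit takes T large, with the number of training examples scaled up correspondingly, which renders the averages analytically tractable.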