An analytic investigation of the average-case learning and generalization properties of radial basis function (RBF) networks is presented, utilizing online gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the roles of the learning rate and of the specialization of the hidden units, which gives insight into how the time required for training can be reduced. The realizable case and some over-realizable cases are studied in detail: both the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase in which asymptotic convergence occurs are analyzed, and their typical properties determined. Finally, simulations are performed that strongly confirm the analytic results.
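The learning scenario summarized above, a student RBF network trained by online gradient descent on examples labelled by a teacher of the same architecture (the realizable case), can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the network sizes, learning rate, fixed unit width, number of steps, and random seed are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 5, 2    # input dimension and number of hidden units (illustrative sizes)
eta = 0.05     # learning rate (illustrative value)

# Teacher network: in the realizable case the student matches its architecture.
B = rng.normal(size=(K, N))   # teacher centers
v = rng.normal(size=K)        # teacher hidden-to-output weights

def rbf(x, centers):
    # Gaussian hidden-unit activations exp(-|x - m_k|^2 / 2), unit width assumed
    d = x[None, :] - centers
    return np.exp(-0.5 * np.sum(d * d, axis=1))

def forward(x, centers, weights):
    return weights @ rbf(x, centers)

# Student starts from small random centers and weights.
M = 0.5 * rng.normal(size=(K, N))
w = 0.5 * rng.normal(size=K)

def gen_error(n_samples=2000):
    # Monte Carlo estimate of the generalization error over Gaussian inputs
    xs = rng.normal(size=(n_samples, N))
    return float(np.mean([0.5 * (forward(x, M, w) - forward(x, B, v)) ** 2
                          for x in xs]))

e_initial = gen_error()
for _ in range(20000):
    x = rng.normal(size=N)                       # fresh example each step (online learning)
    g = rbf(x, M)
    err = forward(x, M, w) - forward(x, B, v)    # signed output error on this example
    # Gradient-descent updates of output weights and centers
    w -= eta * err * g
    for k in range(K):
        M[k] -= eta * err * w[k] * g[k] * (x - M[k])
e_final = gen_error()
```

Tracking `gen_error` along the trajectory would show the behaviour described in the abstract: an initial plateau while the student's hidden units remain unspecialized (each correlated equally with every teacher unit), followed by specialization and convergence once the symmetry is broken.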