S. B. Holden and P. J. W. Rayner, "Generalization and PAC Learning: Some New Results for the Class of Generalized Single-Layer Networks," IEEE Transactions on Neural Networks, 6(2), 1995, pp. 368-380
The ability of connectionist networks to generalize is often cited as one of their most important properties. We analyze the generalization ability of the class of generalized single-layer networks (GSLN's), which includes Volterra networks, radial basis function networks, regularization networks, and the modified Kanerva model, using techniques based on the theory of probably approximately correct (PAC) learning that have previously been used to analyze the generalization ability of feedforward networks of linear threshold elements (LTE's). An introduction to the relevant computational learning theory is included.
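For concreteness, a GSLN computes a fixed nonlinear expansion of the input followed by a single layer of adjustable output weights, so the model is linear in its parameters and training reduces to linear least squares. The sketch below is our own illustration rather than code from the paper; the Gaussian basis functions, their centres, and their width are arbitrary choices standing in for any of the expansions named above.

    import numpy as np

    def rbf_features(X, centres, width):
        # Fixed nonlinear expansion phi_i(x) = exp(-||x - c_i||^2 / (2 width^2));
        # the GSLN output is linear in the weights applied to these features.
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Toy data (illustrative only): learn y = sin(x) from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

    centres = np.linspace(-3.0, 3.0, 10)[:, None]  # fixed basis-function centres
    Phi = rbf_features(X, centres, width=0.8)

    # Only the output weights are adjustable, so fitting is ordinary least squares.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    y_hat = Phi @ w

Because only the output layer is trained, the hypothesis class is a set of linear combinations of fixed functions, which is what makes the PAC analysis tractable.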
We derive necessary and sufficient conditions on the number of training examples required by a GSLN to guarantee a particular generalization performance.
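For orientation, the classical distribution-free bound of Blumer et al. (quoted here as background; it is not necessarily the sharper condition derived in the paper) states that any consistent learner over a hypothesis class of VC dimension d achieves error at most epsilon with probability at least 1 - delta given

    m \ge \max\!\left( \frac{4}{\epsilon} \log_2 \frac{2}{\delta},\; \frac{8d}{\epsilon} \log_2 \frac{13}{\epsilon} \right)

training examples, and Omega(d/epsilon) examples are necessary. For a thresholded GSLN, d is at most the number of adjustable weights (counting any bias as a weight on a constant basis function), since the network is then a linear threshold function over its fixed feature space.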
We compare our results to those given previously for feedforward networks of LTE's and show that, on the basis of the currently available bounds, the sufficient number of training examples for GSLN's will typically be considerably smaller than for feedforward networks of LTE's with the same number of weights.
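For comparison (our gloss, using the standard result of Baum and Haussler rather than the specific bounds treated in the paper), the sufficient sample size for a feedforward network of LTE's with W weights and N computational nodes is

    m = O\!\left( \frac{W}{\epsilon} \log \frac{N}{\epsilon} \right),

which carries an extra logarithmic factor in the network size; a bound that is linear in the number of weights, as for GSLN's, is therefore typically smaller at the same weight count.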
We show that the use of self-structuring techniques for GSLN's may reduce the number of training examples sufficient to guarantee good generalization performance, and we provide an explanation for the fact that GSLN's can require a relatively large number of weights.