EFFICIENT AGNOSTIC LEARNING OF NEURAL NETWORKS WITH BOUNDED FAN-IN

Citation
W.S. Lee et al., EFFICIENT AGNOSTIC LEARNING OF NEURAL NETWORKS WITH BOUNDED FAN-IN, IEEE Transactions on Information Theory, 42(6), 1996, pp. 2118-2132
Citations number
30
Subject Categories
Information Science & Library Science; Engineering, Electrical & Electronic
ISSN journal
0018-9448
Volume
42
Issue
6
Year of publication
1996
Part
2
Pages
2118 - 2132
Database
ISI
SICI code
0018-9448(1996)42:6<2118:EALONN>2.0.ZU;2-G
Abstract
We show that the class of two-layer neural networks with bounded fan-in is efficiently learnable in a realistic extension to the Probably Approximately Correct (PAC) learning model. In this model, a joint probability distribution is assumed to exist on the observations and the learner is required to approximate the neural network which minimizes the expected quadratic error. As special cases, the model allows learning real-valued functions with bounded noise, learning probabilistic concepts, and learning the best approximation to a target function that cannot be well approximated by the neural network. The networks we consider have real-valued inputs and outputs, an unlimited number of threshold hidden units with bounded fan-in, and a bound on the sum of the absolute values of the output weights. The number of computation steps of the learning algorithm is bounded by a polynomial in 1/epsilon, 1/delta, n, and B, where epsilon is the desired accuracy, delta is the probability that the algorithm fails, n is the input dimension, and B is the bound on both the absolute value of the target (which may be a random variable) and the sum of the absolute values of the output weights. In obtaining the result, we also extended some results on iterative approximation of functions in the closure of the convex hull of a function class and on the sample complexity of agnostic learning with the quadratic loss function.
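
Note
The abstract's mention of iterative approximation in the closure of the convex hull of a function class suggests a greedy, stagewise construction of the hypothesis. The Python sketch below is only an illustration of that general technique under assumed details, not the paper's actual algorithm: the names (fit_greedy, threshold_unit, n_candidates), the random search over candidate units, and the step size alpha = 2/(k+1) are all illustrative choices, with the step-size schedule taken from standard greedy convex-approximation arguments.

import numpy as np

rng = np.random.default_rng(0)

def threshold_unit(X, w, b):
    """A threshold hidden unit: sign(w . x - b), with values in {-1, 0, +1}."""
    return np.sign(X @ w - b)

def fit_greedy(X, y, B=1.0, n_iters=50, n_candidates=200, fan_in=3):
    """Greedily mix scaled threshold units so each iterate stays in the
    (scaled) convex hull: output-weight l1-norm at most B throughout."""
    n, d = X.shape
    f = np.zeros(n)                      # current approximation f_k on the sample
    for k in range(1, n_iters + 1):
        alpha = 2.0 / (k + 1)            # standard greedy/convex-approximation step
        best_h, best_err = None, np.inf
        for _ in range(n_candidates):    # random search over candidate units
            idx = rng.choice(d, size=min(fan_in, d), replace=False)
            w = np.zeros(d)
            w[idx] = rng.standard_normal(len(idx))  # respects the fan-in bound
            b = rng.standard_normal()
            for s in (-B, B):            # signed, scaled unit: s * sign(w.x - b)
                h = s * threshold_unit(X, w, b)
                cand = (1 - alpha) * f + alpha * h
                err = np.mean((y - cand) ** 2)
                if err < best_err:
                    best_err, best_h = err, h
        f = (1 - alpha) * f + alpha * best_h
    return f

# Toy usage: approximate a target bounded by B = 1 in quadratic loss.
X = rng.standard_normal((500, 5))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1])
f_hat = fit_greedy(X, y, B=1.0)
print("mean quadratic error:", np.mean((y - f_hat) ** 2))

In this kind of construction, each iterate is a convex combination of at most k scaled threshold units, which is what keeps the sum of absolute output weights bounded by B, the quantity the runtime polynomial in the abstract depends on.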