ON PAC LEARNING OF FUNCTIONS WITH SMOOTHNESS PROPERTIES USING FEEDFORWARD SIGMOIDAL NETWORKS

Citation
N.S.V. Rao and V.A. Protopopescu, ON PAC LEARNING OF FUNCTIONS WITH SMOOTHNESS PROPERTIES USING FEEDFORWARD SIGMOIDAL NETWORKS, Proceedings of the IEEE, 84(10), 1996, pp. 1562-1569
Citations number
23
Subject Categories
Engineering, Electrical & Electronic
Journal title
Proceedings of the IEEE
ISSN journal
00189219
Volume
84
Issue
10
Year of publication
1996
Pages
1562 - 1569
Database
ISI
SICI code
0018-9219(1996)84:10<1562:OPLOFW>2.0.ZU;2-Q
Abstract
We consider the problem of learning functions based on finite samples by using feedforward sigmoidal networks. The unknown function f is chosen from a family that has either bounded modulus of smoothness and/or bounded capacity. The sample is given by (X(1), f(X(1))), (X(2), f(X(2))), ..., (X(n), f(X(n))), where X(1), X(2), ..., X(n) are independently and identically distributed (IID) according to an unknown distribution P-X. General results guarantee the existence of a neural network, f(w), that best approximates f in terms of expected error. However, since both f and P-X are unknown, computing f(w) is impossible in general. Suitable neural network probably and approximately correct (PAC) approximations to f(w) can, in principle, be computed based on a finite sample, but their computation is hard. Instead, we propose to compute PAC approximations to f(w) based on alternative estimators, namely: 1) the nearest neighbor rule, 2) local averaging, and 3) Nadaraya-Watson estimators, all computed using the Haar system. We show that given a sufficiently large sample, each of these estimators guarantees a performance as close as desired to that of f(w). The practical importance of this result stems from the fact that, unlike neural networks, the three estimators above are linear-time computable in terms of the sample size.
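To illustrate the third estimator mentioned in the abstract, the following is a minimal sketch of a Nadaraya-Watson regression estimate computed from an IID sample (X(i), f(X(i))). The paper builds its estimators using the Haar system; the Gaussian kernel, the bandwidth parameter h, and the target function used below are illustrative assumptions, not the authors' construction. Note that each evaluation costs one pass over the sample, i.e., time linear in the sample size n, in line with the abstract's closing remark.

```python
import numpy as np

def nadaraya_watson(x, X, Y, h=0.1):
    """Nadaraya-Watson estimate of f at the point x.

    X, Y : arrays of n sample points (X_i, f(X_i)).
    h    : kernel bandwidth (assumed tuning parameter).
    The estimate is a kernel-weighted average of the observed values,
    computed in O(n) time per query point.
    """
    w = np.exp(-0.5 * ((x - X) / h) ** 2)  # Gaussian kernel weights
    return float(np.dot(w, Y) / np.sum(w))

# IID sample from an unknown distribution P_X, with f(x) = x^2 as a stand-in
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)
Y = X ** 2
estimate = nadaraya_watson(0.5, X, Y, h=0.05)
```

With 200 sample points and a small bandwidth, the estimate at x = 0.5 lands close to the true value f(0.5) = 0.25; as the abstract states, a sufficiently large sample drives such estimators as close as desired to the best-approximating network.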