LEARNING ALGORITHMS FOR FEEDFORWARD NETWORKS BASED ON FINITE SAMPLES

Citation
N.S.V. Rao et al., LEARNING ALGORITHMS FOR FEEDFORWARD NETWORKS BASED ON FINITE SAMPLES, IEEE Transactions on Neural Networks, 7(4), 1996, pp. 926-940
Citations number
59
Subject Categories
Computer Application, Chemistry & Engineering; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods
ISSN journal
1045-9227
Volume
7
Issue
4
Year of publication
1996
Pages
926 - 940
Database
ISI
SICI code
1045-9227(1996)7:4<926:LAFFNB>2.0.ZU;2-O
Abstract
We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. [2]. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
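
Note on the second class of algorithms
The abstract's second class of algorithms applies classical Robbins-Monro style stochastic approximation to all weights of a feedforward network. The Python sketch below is a minimal, generic illustration of such an update, assuming a one-hidden-layer tanh network, a noisy sine regression target, and the step-size schedule a_t = 0.5 / t^0.75; none of these choices are taken from the paper itself.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network: f(x) = v @ tanh(W @ x), scalar in/out.
hidden = 8
W = rng.normal(scale=0.5, size=(hidden, 1))   # hidden-layer weights
v = rng.normal(scale=0.5, size=(1, hidden))   # output-layer weights

def forward(x):
    h = np.tanh(W @ x)                        # hidden activations, shape (hidden, 1)
    return (v @ h).item(), h

def target(x):
    return np.sin(3.0 * x).item()             # assumed regression to be learned

for t in range(1, 20001):
    x = rng.uniform(-1.0, 1.0, size=(1, 1))   # i.i.d. input sample
    y = target(x) + 0.05 * rng.normal()       # noisy observation of the regression
    y_hat, h = forward(x)
    err = y_hat - y
    # Gradients of the squared error with respect to output and hidden weights.
    grad_v = err * h.T
    grad_W = err * (v.T * (1.0 - h ** 2)) @ x.T
    # Robbins-Monro step sizes: sum(a_t) diverges, sum(a_t**2) converges.
    a_t = 0.5 / t ** 0.75
    v -= a_t * grad_v
    W -= a_t * grad_W

# Rough check of the fit on a grid of inputs.
xs = np.linspace(-1.0, 1.0, 11).reshape(-1, 1, 1)
mse = np.mean([(forward(x)[0] - target(x)) ** 2 for x in xs])
print(f"approximate mean squared error after training: {mse:.4f}")

The essential Robbins-Monro ingredient here is the decaying step-size schedule; the paper's contribution is to derive finite-sample conditions relating sample sizes to error bounds via martingale-type inequalities, which this sketch does not attempt to reproduce.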