On the optimality of neural-network approximation using incremental algorithms

Citation
R. Meir and V.E. Maiorov, On the optimality of neural-network approximation using incremental algorithms, IEEE Transactions on Neural Networks, 11(2), 2000, pp. 323-337
Citations number
42
Subject Categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
11
Issue
2
Year of publication
2000
Pages
323 - 337
Database
ISI
SICI code
1045-9227(200003)11:2<323:OTOONA>2.0.ZU;2-A
Abstract
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L_2 norm, we compute upper bounds on the approximation error, where error is measured by the L_q norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the case q = 2, and provide an explicit algorithm to achieve the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L_2 norm to L_p are also discussed. A further interesting conclusion from our results is that no loss of generality is suffered using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which are sufficient to guarantee the established convergence rates.
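
Note. For a concrete picture of the kind of scheme the abstract describes, the following is a minimal sketch of a generic greedy incremental algorithm: at each step one sigmoidal unit with a positive output weight is added, and the running approximant is updated as a convex combination f_n = (1 - alpha_n) f_{n-1} + alpha_n * beta_n * g_n. This is an illustrative sketch in the spirit of classical incremental approximation, not the paper's specific algorithm; the function names, the random candidate search, and all parameters are assumptions for demonstration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_fit(x, f, n_units=50, n_candidates=200, rng=None):
    """Greedy incremental approximation of samples f(x) by a one-hidden-layer
    sigmoidal network with positive hidden-to-output weights (illustrative).

    Step n update: f_n = (1 - alpha) * f_{n-1} + alpha * beta * sigmoid(w*x + b),
    with alpha = 1/n; the new unit (w, b, beta >= 0) is chosen by random search.
    """
    rng = np.random.default_rng(rng)
    approx = np.zeros_like(f)
    units = []
    for n in range(1, n_units + 1):
        alpha = 1.0 / n  # step size of the convex-combination update
        best_err, best = np.inf, None
        for _ in range(n_candidates):
            w, b = rng.normal(scale=5.0), rng.normal(scale=5.0)
            g = sigmoid(w * x + b)
            # Least-squares output weight, projected onto beta >= 0 so that
            # hidden-to-output weights stay positive.
            denom = alpha * np.dot(g, g)
            beta = max(0.0, np.dot(f - (1 - alpha) * approx, g) / denom) if denom > 0 else 0.0
            err = np.linalg.norm(f - ((1 - alpha) * approx + alpha * beta * g))
            if err < best_err:
                best_err, best = err, (w, b, beta, g)
        w, b, beta, g = best
        approx = (1 - alpha) * approx + alpha * beta * g
        units.append((w, b, beta))
    return approx, units

# Usage: approximate f(t) = sin(2*pi*t) on [0, 1]; error reported in L_2,
# though the paper's analysis covers L_q for 1 <= q <= infinity.
x = np.linspace(0.0, 1.0, 256)
f = np.sin(2 * np.pi * x)
approx, units = incremental_fit(x, f, rng=0)
print("L2 error:", np.linalg.norm(f - approx) / np.sqrt(len(x)))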