Complexity of neural network approximation with limited information: A worst case approach

Citation
M. Kon and L. Plaskota, Complexity of neural network approximation with limited information: A worst case approach, J COMPLEX, 17(2), 2001, pp. 345-365
Number of citations
17
Subject Categories
Engineering Mathematics
Journal title
JOURNAL OF COMPLEXITY
ISSN journal
0885-064X
Volume
17
Issue
2
Year of publication
2001
Pages
345 - 365
Database
ISI
SICI code
0885-064X(200106)17:2<345:CONNAW>2.0.ZU;2-0
Abstract
In neural network theory the complexity of constructing networks to approximate input-output functions is of interest. We study this in the more general context of approximating elements f of a normed space F using partial information about f. We assume information about f and the size of the network are limited, as is typical in radial basis function networks. We show complexity can be essentially split into two independent parts, information epsilon-complexity and neural epsilon-complexity. We use a worst case setting, and integrate elements of information-based complexity and nonlinear approximation. We consider deterministic and/or randomized approximations using information possibly corrupted by noise. The results are illustrated by examples including approximation by piecewise polynomial neural networks. (C) 2001 Academic Press.
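A schematic reading of the abstract's central claim, stated here only as an illustration (the names comp_info and comp_net are not the paper's notation): the worst case epsilon-complexity of the approximation problem behaves, essentially, like the combination of two independent parts,

\[
  \mathrm{comp}(\varepsilon) \;\asymp\; \mathrm{comp}_{\mathrm{info}}(\varepsilon) \;+\; \mathrm{comp}_{\mathrm{net}}(\varepsilon),
\]

where comp_info(epsilon) measures how much (possibly noisy) information about f is needed and comp_net(epsilon) measures how large a network is needed to reach accuracy epsilon. The precise form of the decomposition is given in the paper itself.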