The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L_2 norm, we compute upper bounds on the approximation error, where the error is measured by the L_q norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the case q = 2, and provide an explicit algorithm to achieve the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L_2 norm to L_p are also discussed. A further interesting conclusion from our results is that no loss of generality is suffered using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which are sufficient to guarantee the stated convergence rates.
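For illustration only, the following is a minimal sketch of how an incremental (greedy) approximation scheme of this general flavor can be organized: one sigmoidal hidden unit is added per step, and each hidden-to-output weight is kept nonnegative. The random candidate search, the sigmoidal activation, and all names below are assumptions made for the example; this is not the specific algorithm analyzed in the paper.

# Illustrative sketch of an incremental single-hidden-layer approximation
# with nonnegative hidden-to-output weights (assumed setup, not the paper's).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_fit(x, y, n_units=20, n_candidates=200, seed=None):
    """Greedily add one sigmoidal unit at a time to reduce the residual."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    f = np.zeros_like(y)            # current approximation f_k
    units = []                      # list of (w, b, c) with c >= 0
    for _ in range(n_units):
        r = y - f                   # residual to be reduced
        # Draw random candidate directions/biases and keep the unit whose
        # output is most (positively) correlated with the residual.
        W = rng.normal(size=(n_candidates, d))
        B = rng.normal(size=n_candidates)
        H = sigmoid(x @ W.T + B)    # shape (n_samples, n_candidates)
        j = int(np.argmax(H.T @ r))
        h = H[:, j]
        # Least-squares step size for the new unit, clipped to be nonnegative.
        c = max(float(h @ r) / float(h @ h), 0.0)
        f = f + c * h
        units.append((W[j], B[j], c))
    return f, units

# Example usage on a toy one-dimensional target.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=(500, 1))
    y = np.sin(3.0 * x[:, 0])
    f, units = incremental_fit(x, y, n_units=50, seed=1)
    print("empirical L_2 error:", np.sqrt(np.mean((y - f) ** 2)))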