We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. [2].
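As a hedged illustration only (the exact recursion, step sizes, and potential function used in the paper may differ), for a network $f_w(x) = \sum_{j=1}^{m} w_j \varphi_j(x)$ with fixed basis functions $\varphi_j$ and unknown output-layer weights $w$, a potential-function update on the $n$-th sample $(x_n, y_n)$ takes the form

\[
  % sketch: kernel-type stochastic update; the gains \gamma_n are assumed
  w_{n+1} = w_n + \gamma_n \bigl( y_n - f_{w_n}(x_n) \bigr)\, \varphi(x_n),
  \qquad \gamma_n > 0, \quad \sum_n \gamma_n = \infty, \quad \sum_n \gamma_n^2 < \infty,
\]

which, in terms of the potential (kernel) $K(x, x') = \varphi(x)^{\top} \varphi(x')$, updates the realized function as

\[
  f_{n+1}(x) = f_n(x) + \gamma_n \bigl( y_n - f_n(x_n) \bigr) K(x_n, x).
\]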
The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods.
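As a sketch under stated assumptions (squared-error loss and the classical step-size conditions; the paper's precise hypotheses are given in the body), a Robbins-Monro style recursion for the full weight vector $\theta$ of a feedforward network $f_\theta$ is the stochastic gradient update

\[
  % sketch: classical Robbins-Monro conditions on the gains \gamma_n
  \theta_{n+1} = \theta_n - \gamma_n \nabla_\theta \tfrac{1}{2} \bigl( f_{\theta_n}(x_n) - y_n \bigr)^2,
  \qquad \gamma_n > 0, \quad \sum_n \gamma_n = \infty, \quad \sum_n \gamma_n^2 < \infty.
\]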
Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities.
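To indicate the flavor of such bounds (an illustration via the Hoeffding-Azuma inequality, not the paper's specific result): if the per-sample error terms $D_i$ form a martingale difference sequence bounded by $c$, then

\[
  % Hoeffding-Azuma bound for averaged martingale differences with |D_i| \le c
  \Pr\Bigl( \Bigl| \tfrac{1}{n} \sum_{i=1}^{n} D_i \Bigr| \ge \epsilon \Bigr)
  \le 2 \exp\!\Bigl( - \frac{n \epsilon^2}{2 c^2} \Bigr),
\]

so confidence $1 - \delta$ at accuracy $\epsilon$ is guaranteed once the sample size satisfies $n \ge (2 c^2 / \epsilon^2) \ln(2 / \delta)$.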
For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.