The issue of variable stepsize in the backpropagation training algorithm has been widely investigated, and several techniques employing heuristic factors have been suggested to improve training time and reduce convergence to local minima. In this contribution, backpropagation training is based on a modified steepest descent method that allows a variable stepsize. The method is computationally efficient and possesses interesting convergence properties, utilizing estimates of the Lipschitz constant without any additional computational cost. The algorithm has been implemented and tested on several problems, and the results have been very satisfactory. Numerical evidence shows that the method is robust, with good average performance on many classes of problems. Copyright (C) 1996 Elsevier Science Ltd.
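The abstract does not spell out the update rule, but the idea of driving the stepsize from a Lipschitz-constant estimate at no extra cost can be sketched as follows. The sketch below assumes the common local estimate L_k = ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||, which reuses gradients already computed for the descent step, and a stepsize of 1/(2 L_k); the function and parameter names (`train_variable_stepsize`, `loss_grad`, `eta0`) are illustrative, not the paper's, and the actual algorithm may differ in its details.

```python
import numpy as np

def train_variable_stepsize(loss_grad, w0, max_iters=1000, tol=1e-6, eta0=0.1):
    """Steepest descent with a variable stepsize driven by a local
    estimate of the Lipschitz constant of the gradient (illustrative
    sketch; not the exact algorithm of the paper)."""
    w = np.asarray(w0, dtype=float)
    g = loss_grad(w)
    eta = eta0                               # initial stepsize before any estimate exists
    for _ in range(max_iters):
        if np.linalg.norm(g) < tol:          # stop when the gradient is small
            break
        w_new = w - eta * g                  # steepest-descent step
        g_new = loss_grad(w_new)             # the only gradient evaluation per iteration
        dw, dg = w_new - w, g_new - g
        denom = np.linalg.norm(dw)
        if denom > 0:
            L = np.linalg.norm(dg) / denom   # local Lipschitz estimate, no extra cost
            if L > 0:
                eta = 1.0 / (2.0 * L)        # adapt the stepsize for the next step
        w, g = w_new, g_new
    return w

# Usage example: minimise the quadratic E(w) = 0.5 * ||w||^2, whose gradient is w.
w_star = train_variable_stepsize(lambda w: w, w0=np.ones(3))
```

In a backpropagation setting, `loss_grad` would return the gradient of the network error with respect to all weights, so the Lipschitz estimate is obtained from quantities the training loop already computes.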