G.D. Magoulas et al., Improving the convergence of the backpropagation algorithm using learning rate adaptation methods, Neural Computation, 11(7), 1999, pp. 1769-1796
Citations: 53
Subject Categories: Neurosciences & Behavior; AI, Robotics and Automatic Control
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
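The core idea in the abstract can be sketched as follows: estimate a local Lipschitz constant of the gradient from the two most recent iterates (using only quantities already computed, so no extra error-function or gradient evaluations are needed) and set the step size from that estimate. This is a minimal illustrative sketch, not the paper's exact algorithm; the quadratic error function and the step rule eta_k = 1 / (2 * L_k) are assumptions made for the example.

```python
import numpy as np

def lipschitz_gd(steps=50):
    """Gradient descent on a toy quadratic error function
    E(w) = 0.5 * w^T A w, with the learning rate adapted from a
    local Lipschitz-constant estimate (illustrative sketch only)."""
    A = np.array([[3.0, 0.2],
                  [0.2, 1.0]])

    def grad(w):
        return A @ w

    w_prev = np.array([1.0, -1.0])
    w = np.array([0.9, -0.8])
    g_prev = grad(w_prev)

    for _ in range(steps):
        g = grad(w)
        # Local Lipschitz estimate from consecutive iterates:
        # L_k ~ ||grad(w_k) - grad(w_{k-1})|| / ||w_k - w_{k-1}||
        L = np.linalg.norm(g - g_prev) / np.linalg.norm(w - w_prev)
        eta = 1.0 / (2.0 * L)   # assumed step rule; keeps the step stable
        w_prev, g_prev = w, g
        w = w - eta * g         # plain gradient step, no line-search evals
    return w

w_final = lipschitz_gd()
print(np.linalg.norm(w_final))  # norm shrinks toward 0, the minimizer
```

Because the estimate L lies between the smallest and largest curvature of the quadratic, the resulting step 1/(2L) stays below the divergence threshold 2/lambda_max, so the iteration contracts toward the minimum without any extra function evaluations per step.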