Improving the convergence of the backpropagation algorithm using learning rate adaptation methods

Citation
G.D. Magoulas et al., Improving the convergence of the backpropagation algorithm using learning rate adaptation methods, NEURAL COMP, 11(7), 1999, pp. 1769-1796
Number of citations
53
Subject categories
Neurosciences & Behavior; AI, Robotics and Automatic Control
Journal title
NEURAL COMPUTATION
ISSN journal
0899-7667
Volume
11
Issue
7
Year of publication
1999
Pages
1769 - 1796
Database
ISI
SICI code
0899-7667(19991001)11:7<1769:ITCOTB>2.0.ZU;2-9
Abstract
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms against several popular training methods.
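As a rough illustration of the approach the abstract describes, the sketch below adapts the learning rate from a local Lipschitz estimate Lambda_k = ||grad E(w_k) - grad E(w_{k-1})|| / ||w_k - w_{k-1}||, takes a step of size 1/(2*Lambda_k), and safeguards it with an Armijo-style sufficient-decrease test. This is a minimal sketch under stated assumptions, not the paper's exact algorithms: the function names, the backtracking halving factor, and the constants eta0 and alpha are illustrative choices.

import numpy as np

def local_lipschitz(w, w_prev, g, g_prev):
    # Local Lipschitz estimate from the two most recent iterates; both
    # gradients were already computed during backpropagation, so no extra
    # error-function or gradient evaluations are needed.
    return np.linalg.norm(g - g_prev) / np.linalg.norm(w - w_prev)

def train(E, grad_E, w, eta0=0.1, alpha=1e-4, max_iters=1000, tol=1e-6):
    # Gradient descent with a Lipschitz-based adaptive learning rate,
    # safeguarded by an Armijo sufficient-decrease backtracking test.
    g = grad_E(w)
    w_prev, g_prev = w, g
    w = w - eta0 * g                  # first step uses the fixed rate eta0
    for _ in range(max_iters):
        g = grad_E(w)
        if np.linalg.norm(g) < tol:
            break
        eta = 1.0 / (2.0 * local_lipschitz(w, w_prev, g, g_prev))
        Ew = E(w)
        # Armijo condition: halve eta until the step decreases E enough.
        while E(w - eta * g) > Ew - alpha * eta * (g @ g):
            eta *= 0.5
        w_prev, g_prev = w, g
        w = w - eta * g
    return w

# Example: an ill-conditioned quadratic, E(w) = 0.5 * w' A w.
A = np.diag([1.0, 10.0])
w_min = train(lambda w: 0.5 * w @ A @ w, lambda w: A @ w,
              np.array([4.0, -3.0]))

In this common-rate form every weight shares one eta_k per iteration; the per-weight variants the abstract mentions would instead maintain one such estimate for each coordinate of w.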