INCREMENTAL GRADIENT ALGORITHMS WITH STEPSIZES BOUNDED AWAY FROM ZERO

Authors
Citation
M.V. Solodov, INCREMENTAL GRADIENT ALGORITHMS WITH STEPSIZES BOUNDED AWAY FROM ZERO, Computational Optimization and Applications, 11(1), 1998, pp. 23-35
Citations number
22
Subject Categories
Operations Research & Management Science, Mathematics
ISSN journal
09266003
Volume
11
Issue
1
Year of publication
1998
Pages
23 - 35
Database
ISI
SICI code
0926-6003(1998)11:1<23:IGAWSB>2.0.ZU;2-R
Abstract
We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationally important case. In particular, we show that a certain epsilon-approximate solution can be obtained and establish the linear dependence of epsilon on the stepsize limit. Incremental gradient methods are particularly well-suited for large neural network training problems where obtaining an approximate solution is typically sufficient and is often preferable to computing an exact solution. Thus, in the context of neural networks, the approach presented here is related to the principle of tolerant training. Our results justify numerous stepsize rules that were derived on the basis of extensive numerical experimentation but for which no theoretical analysis was previously available. In addition, convergence to (exact) stationary points is established when the gradient satisfies a certain growth property.
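The scheme the abstract describes can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm or analysis: the cyclic component ordering, the one-dimensional quadratic components, and the specific stepsize value are all assumptions chosen for the example. It illustrates the abstract's main point: with a stepsize bounded away from zero, the iterates do not converge exactly but settle near the minimizer, with an error on the order of the stepsize.

```python
# Illustrative sketch of an incremental gradient method with a constant
# stepsize for minimizing f(x) = sum_i f_i(x).  The component functions
# below are hypothetical examples, not taken from the paper.

def incremental_gradient(grads, x0, stepsize, epochs):
    """Cycle through the component gradients, taking one step per
    component with a fixed stepsize (never driven to zero)."""
    x = x0
    for _ in range(epochs):
        for grad in grads:        # one epoch = one pass over all components
            x = x - stepsize * grad(x)
    return x

# Example components: f_i(x) = 0.5 * (x - c_i)^2, so grad f_i(x) = x - c_i.
# The exact minimizer of the sum is the mean of the c_i, here 3.0.
centers = [1.0, 2.0, 6.0]
x_approx = incremental_gradient(
    [lambda x, c=c: x - c for c in centers],  # default arg captures each c
    x0=0.0, stepsize=0.1, epochs=200)
# The iterates settle near 3.0, but only to within an epsilon that
# shrinks linearly with the stepsize, as the abstract states.
```

For this instance, the cyclic limit point can be computed in closed form (it sits at roughly 3.18 for stepsize 0.1), illustrating the epsilon-approximate solution whose error depends linearly on the stepsize limit.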