Gradient convergence in gradient methods with errors

Citation
D.P. Bertsekas and J.N. Tsitsiklis, Gradient convergence in gradient methods with errors, SIAM Journal on Optimization, 10(3), 2000, pp. 627-642
Number of citations
19
Subject categories
Mathematics
Journal title
SIAM JOURNAL ON OPTIMIZATION
ISSN journal
1052-6234
Volume
10
Issue
3
Year of publication
2000
Pages
627 - 642
Database
ISI
SICI code
1052-6234(20000606)10:3<627:GCIGMW>2.0.ZU;2-H
Abstract
We consider the gradient method $x_{t+1} = x_t + \gamma_t (s_t + w_t)$, where $s_t$ is a descent direction of a function $f : \mathbb{R}^n \to \mathbb{R}$ and $w_t$ is a deterministic or stochastic error. We assume that $\nabla f$ is Lipschitz continuous, that the stepsize $\gamma_t$ diminishes to 0, and that $s_t$ and $w_t$ satisfy standard conditions. We show that either $f(x_t) \to -\infty$ or $f(x_t)$ converges to a finite value and $\nabla f(x_t) \to 0$ (with probability 1 in the stochastic case), and in doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of $f$, boundedness of $\nabla f(x_t)$, or boundedness of $x_t$.
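To illustrate the iteration studied in the paper (this sketch is not part of the record or the paper itself), here is a minimal Python example of the gradient method with errors on a simple quadratic $f(x) = \tfrac{1}{2}\|x\|^2$. The choice of $f$, the noise scale, and the stepsize $\gamma_t = 1/(t+1)$ (which satisfies the usual diminishing-stepsize conditions $\sum_t \gamma_t = \infty$, $\sum_t \gamma_t^2 < \infty$) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# f(x) = 0.5 * ||x||^2, so grad f(x) = x; grad f is Lipschitz with constant 1.
def grad_f(x):
    return x

rng = np.random.default_rng(0)
x = np.array([5.0, -3.0])                    # arbitrary starting point (assumption)
for t in range(100_000):
    gamma = 1.0 / (t + 1)                    # diminishing stepsize: sum = inf, sum of squares < inf
    s = -grad_f(x)                           # descent direction s_t
    w = 0.1 * rng.standard_normal(x.shape)   # zero-mean stochastic error w_t (assumed scale)
    x = x + gamma * (s + w)                  # the iteration x_{t+1} = x_t + gamma_t (s_t + w_t)

# Consistent with the paper's result, the gradient norm should be near 0.
print(np.linalg.norm(grad_f(x)))
```

Here $f$ is bounded below and the iterates stay bounded, so $f(x_t)$ converges and $\nabla f(x_t) \to 0$ with probability 1; the paper's contribution is that the same gradient convergence holds without assuming any of these boundedness conditions in advance.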