We consider the gradient method x(t+1) = x(t) + γ(t)(s(t) + w(t)), where s(t) is a descent direction of a function f : R^n → R and w(t) is a deterministic or stochastic error. We assume that ∇f is Lipschitz continuous, that the stepsize γ(t) diminishes to 0, and that s(t) and w(t) satisfy standard conditions. We show that either f(x(t)) → −∞, or f(x(t)) converges to a finite value and ∇f(x(t)) → 0 (with probability 1 in the stochastic case); in doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of f, boundedness of ∇f(x(t)), or boundedness of x(t).
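The iteration described above can be illustrated numerically. The following is a minimal sketch, not the paper's method: it assumes a concrete test function f(x) = ½‖x‖² (whose gradient is Lipschitz with constant 1), takes s(t) = −∇f(x(t)) as the descent direction, models w(t) as Gaussian noise, and uses the diminishing stepsize γ(t) = 1/(t+1); the function and parameter names are hypothetical.

```python
import numpy as np

def noisy_gradient_method(grad_f, x0, n_steps=20000, noise_scale=0.1, seed=0):
    """Run x(t+1) = x(t) + gamma(t) * (s(t) + w(t)) with
    s(t) = -grad_f(x(t)) as the descent direction and Gaussian error w(t).
    The stepsize gamma(t) = 1/(t+1) diminishes to 0 (and satisfies the
    usual summability conditions: sum gamma(t) = inf, sum gamma(t)^2 < inf).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(n_steps):
        gamma = 1.0 / (t + 1)                            # diminishing stepsize
        s = -grad_f(x)                                   # descent direction
        w = noise_scale * rng.standard_normal(x.shape)   # stochastic error
        x = x + gamma * (s + w)
    return x

# Hypothetical example: f(x) = 0.5 * ||x||^2, so grad_f(x) = x.
# Despite the persistent noise, grad f(x(t)) is driven toward 0.
x_final = noisy_gradient_method(lambda x: x, x0=[5.0, -3.0])
grad_norm = float(np.linalg.norm(x_final))
print(grad_norm)
```

Here f is bounded below, so the theorem's first alternative f(x(t)) → −∞ is ruled out and the gradient norm at the final iterate should be near zero; with a noise term whose variance did not vanish and a constant stepsize, the iterates would instead hover in a noise-dominated region.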