We present two new error bounds for optimization problems over a convex set whose objective function $f$ is either semianalytic or $\gamma$-strictly convex, with $\gamma \ge 1$. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function value sequence $\{f(x^k)\}$ converges at least at the sublinear rate of $k^{-\epsilon}$ for some positive constant $\epsilon$, where $k$ is the iteration index. Moreover, the distances from the iterate sequence $\{x^k\}$ to the set of stationary points of the optimization problem converge to zero at least sublinearly.
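
For concreteness, the hypotheses and conclusions can be written out in displayed form; this is a sketch under stated assumptions, since the abstract itself does not fix the exact definitions. One common formulation of $\gamma$-strict convexity, assumed here for illustration only with a modulus $c > 0$, is
% Assumption: illustrative formulation of gamma-strict convexity;
% the paper may use a variant.
\[
f(\lambda x + (1-\lambda)y) \;\le\; \lambda f(x) + (1-\lambda) f(y) - c\,\lambda(1-\lambda)\,\|x-y\|^{\gamma},
\qquad \gamma \ge 1,
\]
for all $x, y$ in the convex feasible set and all $\lambda \in [0,1]$. With $\bar f$ denoting the limit of $\{f(x^k)\}$, $X^*$ the set of stationary points, and $C, \epsilon > 0$ generic constants, the stated rates read
% The two conclusions from the abstract in displayed form;
% \bar f, X^*, and C are notational assumptions.
\[
f(x^k) - \bar f \;\le\; C\,k^{-\epsilon}
\qquad\text{and}\qquad
\operatorname{dist}\bigl(x^k, X^*\bigr) \to 0 \ \text{at least sublinearly}.
\]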