Convergence rate results are derived for a stochastic optimization problem in which a performance measure is minimized with respect to a vector parameter theta. Assuming that a gradient estimator is available, and that both the bias and the variance of this estimator are (known) functions of the computational budget devoted to its computation, the gradient estimator is employed in conjunction with a stochastic approximation (SA) algorithm. Our interest is in how to allocate the total available computational budget across the successive SA iterations. We address the asymptotic version of this problem by finding the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort; from this, the optimal rate of increase of the computational budget per iteration can be determined. Explicit expressions are derived for the case where the computational budget devoted to an iteration is a polynomial in the iteration number and where the bias and variance of the gradient estimator are polynomials of the computational budget. Applications include the optimization of steady-state simulation models with likelihood ratio, perturbation analysis, or finite-difference gradient estimators; optimization of infinite-horizon models with discounting; optimization of functions of several expectations; and so on. Several examples are discussed. Our results readily generalize to general root-finding problems.
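To make the setup concrete, the following Python sketch illustrates, under assumptions of our own rather than the paper's specific model, an SA recursion in which the k-th iteration receives a budget m_k = c*k^p and the gradient estimator's bias and variance decay polynomially in m_k. The quadratic objective, the decay exponents, and the step-size constants are all illustrative choices.

```python
# Minimal sketch (not the paper's algorithm): SA with a per-iteration budget
# m_k that grows polynomially in k, and a gradient estimator whose bias and
# variance are polynomials in 1/m_k. Objective and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def grad_estimator(theta, m, bias_exp=1.0, var_exp=1.0):
    """Biased, noisy gradient of the toy objective f(theta) = 0.5*||theta||^2.
    Assumed polynomial decay: bias ~ m^{-bias_exp}, variance ~ m^{-var_exp}."""
    true_grad = theta                                  # exact gradient of the quadratic
    bias = m ** (-bias_exp)                            # deterministic bias term
    noise = rng.normal(scale=m ** (-var_exp / 2.0), size=theta.shape)
    return true_grad + bias + noise

def budgeted_sa(theta0, n_iters=1000, a=1.0, c=1, p=1.0):
    """SA with gain a_k = a/k and per-iteration budget m_k = c * k^p."""
    theta = np.asarray(theta0, dtype=float)
    total_budget = 0
    for k in range(1, n_iters + 1):
        m_k = max(1, int(np.ceil(c * k ** p)))         # budget spent on iteration k
        total_budget += m_k
        theta = theta - (a / k) * grad_estimator(theta, m_k)
    return theta, total_budget

theta_hat, budget = budgeted_sa(np.array([5.0, -3.0]), n_iters=2000, p=1.0)
print("estimate:", theta_hat, "total budget:", budget)
```

Varying the exponent p in this sketch trades off the number of iterations against the accuracy of each gradient estimate for a fixed total budget, which is the allocation question the convergence-rate analysis addresses.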