In this paper, we present a stochastic optimization algorithm, based on the gradient method, that incorporates a new adaptive-precision technique. Unlike recent methods, the proposed algorithm uses this technique to select the estimation precision adaptively, without any prior knowledge of the convergence speed of the generated sequence. As a result, the algorithm avoids increasing the estimation precision unnecessarily while retaining its favorable convergence properties, thereby balancing the requirements of computational accuracy against those of computational expediency. Furthermore, we present two types of convergence results that specify which kinds of convergence can be obtained for the proposed algorithm, and under which assumptions.
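To make the idea concrete, the following is a minimal sketch of how such an adaptive-precision gradient loop might look, assuming the "precision" is the number of stochastic gradient samples averaged per iteration. The variance-based test, the parameters `theta` and `growth`, and the doubling rule are illustrative assumptions for exposition, not the algorithm or analysis of this paper.

```python
import numpy as np

def adaptive_precision_sgd(grad_sample, x0, steps=100, lr=0.1,
                           n0=4, theta=1.0, growth=2):
    """Gradient method with adaptively selected estimation precision.

    Illustrative sketch only: `grad_sample(x)` returns one stochastic
    gradient sample, and the precision is the sample size `n` averaged
    per step. `n` grows only when a variance test (an assumption here,
    not the paper's rule) suggests the current estimate is too noisy.
    """
    x, n = np.asarray(x0, dtype=float), n0
    for _ in range(steps):
        samples = np.array([grad_sample(x) for _ in range(n)])
        g = samples.mean(axis=0)
        # Compare the estimated variance of the mean against ||g||^2;
        # raise the precision only when the estimate looks unreliable.
        var_of_mean = samples.var(axis=0, ddof=1).sum() / n
        if var_of_mean > theta * np.dot(g, g):
            n *= growth
        x = x - lr * g  # plain gradient step with the current estimate
    return x, n

# Toy usage: minimize E[(x - z)^2 / 2] with z ~ N(1, 0.5^2),
# whose stochastic gradient sample is x - z.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x - rng.normal(1.0, 0.5, size=x.shape)
x_star, final_n = adaptive_precision_sgd(noisy_grad, np.zeros(2))
print(x_star, final_n)  # x_star should approach [1, 1]
```

In this sketch, the sample size stays small while the gradient estimate is informative and grows only when the test fires, which mirrors the stated goal of not increasing the estimation precision unnecessarily.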