It is shown here that stability of the stochastic approximation algorithm is implied by the asymptotic stability of the origin for an associated ODE. This in turn implies convergence of the algorithm. Several specific classes of algorithms are considered as applications. It is found that the results provide (i) a simpler derivation of known results for reinforcement learning algorithms; (ii) a proof for the first time that a class of asynchronous stochastic approximation algorithms are convergent without using any a priori assumption of stability; (iii) a proof for the first time that asynchronous adaptive critic and Q-learning algorithms are convergent for the average cost optimal control problem.
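As a minimal illustration of the ODE method described above, the sketch below runs the stochastic approximation recursion theta_{n+1} = theta_n + a_n (h(theta_n) + noise) with the illustrative choice h(theta) = -theta, for which the associated ODE dtheta/dt = -theta has a globally asymptotically stable origin. The function name, the specific h, the step-size sequence a_n = 1/n, and the noise model are assumptions chosen for the example, not details taken from the paper.

```python
import random

def stochastic_approximation(theta0=5.0, n_steps=20000, seed=0):
    """Run theta_{n+1} = theta_n + a_n * (h(theta_n) + noise_n).

    Here h(theta) = -theta, so the associated ODE dtheta/dt = -theta
    has an asymptotically stable origin; the ODE method then predicts
    that the iterates converge to 0.
    """
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n                      # standard step sizes: sum a_n diverges, sum a_n^2 converges
        noise = rng.gauss(0.0, 1.0)        # zero-mean (martingale-difference) noise
        theta += a_n * (-theta + noise)    # SA update driven by h(theta) = -theta
    return theta

print(abs(stochastic_approximation()))     # the iterate ends up close to 0
```

With this linear h, the recursion reduces to a running average of the noise, so the iterate concentrates near the origin as n grows, in line with the stability-implies-convergence statement of the abstract.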