The so-called accelerated convergence is an ingenious idea to improve the asymptotic accuracy of stochastic approximation (gradient-based) algorithms. The estimates obtained from the basic algorithm are subjected to a second round of averaging, which leads to optimal accuracy for estimates of time-invariant parameters. In this contribution, some simple calculations are used to gain intuitive insight into these mechanisms. Of particular interest are the properties of accelerated convergence schemes in tracking situations. It is shown that a second round of averaging leads to the recursive least-squares algorithm with a forgetting factor. This also means that when the true parameters change as a random walk, accelerated convergence does not, typically, give optimal tracking properties. Copyright © 2001 John Wiley & Sons, Ltd.
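To make the mechanism concrete, the following is a minimal numerical sketch (not the paper's derivation): a scalar stochastic approximation recursion with a second round of uniform averaging, run first for a time-invariant parameter and then for one drifting as a random walk. The step-size exponent 2/3, the fixed gain mu = 0.05 (playing the role of one minus a forgetting factor), and all noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000
noise = rng.normal(0.0, 1.0, N)

# --- Case 1: time-invariant parameter ---------------------------------
# Observations y_t = theta + e_t; the SA recursion is a stochastic
# gradient step for the quadratic criterion E(y - theta)^2 / 2.
theta_true = 1.0
y = theta_true + noise

theta_sa = np.empty(N)
est = 0.0
for t in range(N):
    gamma = (t + 1) ** (-2 / 3)       # step size decaying slower than 1/t
    est += gamma * (y[t] - est)
    theta_sa[t] = est

# Second round of averaging over the SA iterates:
theta_avg = np.cumsum(theta_sa) / np.arange(1, N + 1)

print("time-invariant case")
print("  SA estimate error:       %.4f" % abs(theta_sa[-1] - theta_true))
print("  averaged estimate error: %.4f" % abs(theta_avg[-1] - theta_true))

# --- Case 2: parameter drifting as a random walk ----------------------
# Uniform averaging weighs old and new iterates equally (its memory
# never fades), so the averaged estimate lags the drift; a fixed-gain
# recursion (the analogue of a forgetting factor < 1) keeps tracking.
theta_walk = np.cumsum(rng.normal(0.0, 0.02, N))   # random-walk parameter
y2 = theta_walk + noise

sa2 = np.empty(N)
est = 0.0
for t in range(N):
    gamma = (t + 1) ** (-2 / 3)
    est += gamma * (y2[t] - est)
    sa2[t] = est
avg2 = np.cumsum(sa2) / np.arange(1, N + 1)        # second-round average

fixed = np.empty(N)
est = 0.0
mu = 0.05                                          # illustrative fixed gain
for t in range(N):
    est += mu * (y2[t] - est)
    fixed[t] = est

tail = slice(N // 2, None)                         # steady-state tracking only
rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("random-walk case (RMS tracking error over second half)")
print("  averaged SA: %.4f" % rms(avg2[tail] - theta_walk[tail]))
print("  fixed gain:  %.4f" % rms(fixed[tail] - theta_walk[tail]))
```

With these illustrative choices, the averaged estimate typically beats the raw SA iterate in the constant-parameter case, while in the random-walk case the uniform average lags the drift and the fixed-gain (forgetting-factor-like) recursion tracks markedly better, in line with the abstract's conclusion.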