Recursive least-squares and accelerated convergence in stochastic approximation schemes

Authors
Citation
L. Ljung, Recursive least-squares and accelerated convergence in stochastic approximation schemes, INT J ADAPT, 15(2), 2001, pp. 169-178
Citations number
7
Subject categories
AI Robotics and Automatic Control
Journal title
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING
ISSN journal
0890-6327
Volume
15
Issue
2
Year of publication
2001
Pages
169 - 178
Database
ISI
SICI code
0890-6327(200103)15:2<169:RLAACI>2.0.ZU;2-9
Abstract
The so-called accelerated convergence is an ingenious idea to improve the asymptotic accuracy in stochastic approximation (gradient based) algorithms. The estimates obtained from the basic algorithm are subjected to a second round of averaging, which leads to optimal accuracy for estimates of time-invariant parameters. In this contribution, some simple calculations are used to get some intuitive insight into these mechanisms. Of particular interest is to investigate the properties of accelerated convergence schemes in tracking situations. It is shown that a second round of averaging leads to the recursive least-squares algorithm with a forgetting factor. This also means that in case the true parameters are changing as a random walk, accelerated convergence does not, typically, give optimal tracking properties. Copyright (C) 2001 John Wiley & Sons, Ltd.
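The averaging scheme the abstract describes can be illustrated with a minimal sketch (not taken from the paper): a basic stochastic-gradient (LMS-type) recursion estimates a time-invariant scalar parameter, and its iterates are then averaged in a second round (Polyak-Ruppert averaging). All variable names, gains, and constants below are illustrative assumptions.

```python
import random

random.seed(0)

theta_true = 2.0   # time-invariant parameter to be estimated
n_steps = 5000
noise_std = 0.1

theta = 0.0        # basic stochastic-approximation iterate
theta_bar = 0.0    # second round of averaging over the iterates

for t in range(1, n_steps + 1):
    x = random.gauss(0.0, 1.0)                          # regressor
    y = theta_true * x + random.gauss(0.0, noise_std)   # noisy observation
    gamma = 0.1 / t ** 0.6                              # slowly decaying gain
    theta += gamma * x * (y - x * theta)                # LMS / gradient step
    theta_bar += (theta - theta_bar) / t                # running average

print(f"raw iterate      : {theta:.4f}")
print(f"averaged iterate : {theta_bar:.4f}")
```

For a time-invariant parameter, the averaged iterate is the quantity whose accuracy the averaging step is meant to improve; the paper's point is that in a tracking setting this construction corresponds to recursive least squares with a forgetting factor, which is not automatically the optimal tracker for random-walk parameter drift.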