Convergence of exponentiated gradient algorithms

Citation
S.I. Hill and R.C. Williamson, Convergence of exponentiated gradient algorithms, IEEE Transactions on Signal Processing, 49(6), 2001, pp. 1208-1215
Citations number
13
Subject Category
Electrical & Electronics Engineering
Journal title
IEEE TRANSACTIONS ON SIGNAL PROCESSING
ISSN journal
1053-587X
Volume
49
Issue
6
Year of publication
2001
Pages
1208 - 1215
Database
ISI
SICI code
1053-587X(200106)49:6<1208:COEGA>2.0.ZU;2-1
Abstract
This paper studies three related algorithms: the (traditional) gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (EG +/- algorithm), and the exponentiated gradient algorithm with unnormalized positive and negative weights (EGU +/- algorithm). These algorithms have previously been analyzed using the "mistake-bound framework" in the computational learning theory community. In this paper, we perform a traditional signal processing analysis in terms of the mean square error. A relationship between the learning rate and the mean squared error (MSE) of predictions is found for the family of algorithms. This is used to compare the performance of the algorithms by choosing learning rates such that they converge to the same steady-state MSE. We demonstrate that if the target weight vector is sparse, the EG +/- algorithm typically converges more quickly than the GD or EGU +/- algorithms, which perform very similarly. A side effect of our analysis is a reparametrization of the algorithms that provides insights into their behavior. The general form of the results we obtain is consistent with those obtained in the mistake-bound framework. The application of the algorithms to acoustic echo cancellation is then studied, and it is shown that in some circumstances the EG +/- algorithm will converge faster than the other two algorithms.
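The contrast the abstract draws between GD and EG +/- can be sketched with the standard textbook forms of these updates. The sketch below is not the authors' implementation: the learning rates, the weight-mass bound U, the problem size, and the noiseless sparse-target setup are all illustrative assumptions, chosen only to show the additive (GD) versus multiplicative, normalized (EG +/-) update structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse target weight vector -- the regime where EG +/- is claimed
# to converge faster than GD (illustrative choice, not from the paper).
n = 20
w_true = np.zeros(n)
w_true[:2] = [1.0, -0.5]

def gd_step(w, x, y, eta):
    """Additive gradient descent (LMS) update for squared prediction error."""
    err = np.dot(w, x) - y
    return w - eta * err * x

def eg_pm_step(wp, wm, x, y, eta, U):
    """EG +/- update: multiplicative updates on the positive and negative
    weight components, renormalized so the total weight mass stays at U."""
    err = np.dot(wp - wm, x) - y
    rp = wp * np.exp(-eta * err * x)
    rm = wm * np.exp(+eta * err * x)
    Z = (rp.sum() + rm.sum()) / U
    return rp / Z, rm / Z

# Run both algorithms on noiseless samples from the sparse target.
w = np.zeros(n)                 # GD weights
U = 4.0                         # mass bound, must exceed ||w_true||_1
wp = np.full(n, U / (2 * n))    # EG +/- positive components
wm = np.full(n, U / (2 * n))    # EG +/- negative components
for _ in range(1000):
    x = rng.standard_normal(n)
    y = np.dot(w_true, x)
    w = gd_step(w, x, y, eta=0.01)
    wp, wm = eg_pm_step(wp, wm, x, y, eta=0.05, U=U)

mse_gd = np.mean((w - w_true) ** 2)
mse_eg = np.mean((wp - wm - w_true) ** 2)
```

The EGU +/- variant mentioned in the abstract is the same multiplicative update with the renormalization by Z omitted. Note that the EG +/- weight vector wp - wm is constrained to an L1 ball of radius U, which is why a mass bound larger than the target's L1 norm is needed.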