In this paper, we study a general formulation of linear prediction algorithms including a number of known methods as special cases. We describe a convex duality for this class of methods and propose numerical algorithms to solve the derived dual learning problem. We show that the dual formulation is closely related to online learning algorithms. Furthermore, by using this duality, we show that new learning methods can be obtained. Numerical examples will be given to illustrate various aspects of the newly proposed algorithms.