Gradient-type algorithms commonly employ a scalar step-size, i.e., each entry of the regression vector is multiplied by the same value before updating the coefficients. More flexibility, however, is obtained when the step-size is a matrix: beyond individually scaling the entries of the regression vector, a suitable choice of matrix also permits rotations and decorrelations. A well-known example of a fixed step-size matrix is the Newton-LMS algorithm, and for such a fixed matrix the conditions under which a gradient-type algorithm converges are well known. This article, however, presents robustness and convergence conditions for a least-mean-square (LMS) algorithm with a time-variant matrix step-size. Using the example of a channel estimator in a cellular handset, it is shown that the choice of a particular step-size matrix leads to considerable improvement over the fixed step-size case. (C) 2000 Elsevier Science B.V. All rights reserved.
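The matrix step-size idea described above can be sketched in a few lines. The following is a minimal, hypothetical system-identification example (the channel taps, diagonal step-size values, and iteration count are illustrative assumptions, not taken from the article); it replaces the scalar step-size mu in the LMS update w += mu * x * e with a matrix M applied to the regression vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown 4-tap channel to identify.
h_true = np.array([0.8, -0.4, 0.2, 0.1])
N = len(h_true)

# Matrix step-size: a diagonal matrix scales each tap individually;
# a full matrix would additionally allow rotations/decorrelation
# (e.g., Newton-LMS uses an approximate inverse input covariance).
M = np.diag([0.10, 0.08, 0.06, 0.05])

w = np.zeros(N)  # adaptive filter coefficients
for _ in range(2000):
    x = rng.standard_normal(N)  # regression vector (white input)
    d = h_true @ x              # desired (noise-free) output
    e = d - w @ x               # a-priori estimation error
    w = w + (M @ x) * e         # matrix step-size LMS update

# After adaptation on noise-free data, w approaches h_true.
```

With white input and no observation noise, each diagonal entry of M only has to satisfy the usual scalar stability bound for its tap, which is what makes the fixed-matrix convergence conditions mentioned above tractable; the time-variant case analyzed in the article replaces the constant M with a sequence of matrices.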