Recently, a fast version of the LMS algorithm, in which only a small subset of the coefficients is updated at each iteration, has been published in the literature. In this brief, we analyze the effects of this technique on the discrete cosine transform domain LMS (DCTLMS) algorithm and highlight its advantages and drawbacks. It will be shown, in particular, that a reduction in computational complexity can be achieved without causing any degradation of the steady-state error of the algorithm. The analytical results are then confirmed by simulations using real speech and first-order Markov signals.
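To illustrate the partial-update idea the abstract refers to, the following is a minimal sketch of a sequential partial-update LMS filter, where only a fixed-size subset of the taps is adapted per iteration, cycling through the coefficients. All names and parameters here (`subset_size`, `mu`, the cyclic update schedule) are illustrative assumptions, not the paper's specific scheme or its DCT-domain variant.

```python
import numpy as np

def partial_update_lms(x, d, n_taps=8, mu=0.05, subset_size=2):
    """Sequential partial-update LMS (illustrative sketch).

    Only `subset_size` of the `n_taps` coefficients are updated at
    each iteration, cycling through the taps, which reduces the
    per-sample update cost relative to full LMS.
    """
    w = np.zeros(n_taps)          # adaptive filter coefficients
    e = np.zeros(len(x))          # error signal
    start = 0                     # position of the update window
    for n in range(n_taps - 1, len(x)):
        # Regressor: most recent input samples, newest first
        u = x[n - n_taps + 1:n + 1][::-1]
        y = w @ u                 # filter output
        e[n] = d[n] - y           # a priori error
        # Update only a cyclic subset of the coefficients
        idx = [(start + k) % n_taps for k in range(subset_size)]
        w[idx] += mu * e[n] * u[idx]
        start = (start + subset_size) % n_taps
    return w, e
```

With a noiseless system-identification setup, the filter still converges to the true impulse response; the partial updates mainly slow convergence rather than raise the steady-state error, which is consistent with the abstract's claim.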