In this paper, new online adaptive hidden Markov model (HMM) state estimation schemes are developed, based on extended least squares (ELS) concepts and recursive prediction error (RPE) methods. The best of the new schemes exploit the idempotent nature of Markov chains and work with a least squares prediction error index, using a posteriori estimates, more suited to Markov models than that traditionally used in the identification of linear systems. These new schemes learn the set of N Markov chain states and the a posteriori probability of being in each of the states at each time instant. They are designed to achieve the strengths, in terms of computational effort and convergence rates, of each of the two classes of earlier proposed adaptive HMM schemes, without the weaknesses of each in these areas. The computational effort is of order N. Implementation aspects of the proposed algorithms are discussed, and simulation studies are presented to illustrate convergence rates in comparison with earlier proposed online schemes.
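To fix ideas about the quantities referred to above, the sketch below shows a minimal a posteriori state-probability recursion (the standard normalized HMM filter) combined with a simple prediction-error-style, order-N update of the N state levels. This is an illustrative sketch only, not the ELS/RPE schemes developed in the paper; it assumes scalar observations in Gaussian noise of known standard deviation, a known transition matrix, and hypothetical names such as hmm_posterior_filter and the step-size parameter step.

    import numpy as np

    def hmm_posterior_filter(y, A, mu, sigma, g0, step=0.01):
        """Illustrative online HMM state estimator (not the paper's scheme).

        y     : sequence of scalar observations
        A     : N x N transition probability matrix, rows sum to 1
        mu    : initial estimates of the N Markov chain state levels
        sigma : observation noise standard deviation (assumed known)
        g0    : initial state probability vector, length N
        step  : step size of the illustrative level update (assumption)
        """
        alpha = np.asarray(g0, dtype=float).copy()
        mu = np.asarray(mu, dtype=float).copy()
        posteriors = []
        for yk in y:
            # One-step-ahead prediction of the state distribution.
            pred = A.T @ alpha
            # Gaussian observation likelihood of yk for each candidate state level.
            lik = np.exp(-0.5 * ((yk - mu) / sigma) ** 2)
            # Normalized a posteriori state probabilities at this time instant.
            alpha = pred * lik
            alpha /= alpha.sum()
            # A posteriori one-step prediction of the observation and a simple
            # prediction-error-style correction of the level estimates; the work
            # per time step is of order N.
            y_hat = alpha @ mu
            mu += step * alpha * (yk - y_hat)
            posteriors.append(alpha.copy())
        return np.array(posteriors), mu

For example, with A = np.array([[0.95, 0.05], [0.05, 0.95]]), sigma = 0.5, and a noisy two-level Markov chain observation sequence y, a call such as hmm_posterior_filter(y, A, mu=np.array([-0.2, 1.3]), sigma=0.5, g0=np.array([0.5, 0.5])) returns the running a posteriori probabilities together with refined level estimates.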