State learning and mixing in entropy of hidden Markov processes and the Gilbert-Elliott channel

Citation
B.M. Hochwald and P.R. Jelenkovic, "State learning and mixing in entropy of hidden Markov processes and the Gilbert-Elliott channel," IEEE Transactions on Information Theory, 45(1), 1999, pp. 128-138
Number of citations
12
Subject Categories
Information Technology & Communication Systems
Journal title
IEEE TRANSACTIONS ON INFORMATION THEORY
ISSN journal
0018-9448
Volume
45
Issue
1
Year of publication
1999
Pages
128 - 138
Database
ISI
SICI code
0018-9448(199901)45:1<128:SLAMIE>2.0.ZU;2-W
Abstract
Hidden Markov processes such as the Gilbert-Elliott channel have an infinite dependency structure. Therefore, entropy and channel capacity calculations require knowledge of the infinite past. In practice, such calculations are often approximated with a finite past. It is commonly assumed that the approximations require an unbounded amount of the past as the memory in the underlying Markov chain increases. We show that this is not necessarily true. We derive an exponentially decreasing upper bound on the accuracy of the finite-past approximation that is much tighter than existing upper bounds when the Markov chain mixes well. We also derive an exponentially decreasing upper bound that applies when the Markov chain does not mix at all. Our methods are demonstrated on the Gilbert-Elliott channel, where we prove that a prescribed finite-past accuracy is quickly reached, independently of the Markovian memory. We conclude that the past can be used either to learn the channel state when the memory is high, or to wait until the states mix when the memory is low. Implications for computing and achieving capacity on the Gilbert-Elliott channel are discussed.
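The finite-past approximation the abstract refers to can be illustrated with a small simulation: estimate the entropy rate of the Gilbert-Elliott noise process by the empirical conditional entropy of one noise bit given the previous k bits. This is a minimal sketch, not the paper's method; all parameter names and values (transition probabilities g, b and crossover probabilities p_good, p_bad) are illustrative assumptions.

```python
import random
from collections import Counter
from math import log2

def simulate_ge_noise(n, g, b, p_good, p_bad, seed=0):
    """Simulate n noise bits from a two-state (Gilbert-Elliott) channel.
    g: P(bad -> good), b: P(good -> bad); p_good, p_bad: crossover
    (bit-error) probabilities in the good and bad states.
    Parameter values here are illustrative, not from the paper."""
    rng = random.Random(seed)
    state_good = True
    out = []
    for _ in range(n):
        p = p_good if state_good else p_bad
        out.append(1 if rng.random() < p else 0)
        if state_good:
            state_good = not (rng.random() < b)
        else:
            state_good = rng.random() < g
    return out

def finite_past_entropy(z, k):
    """Empirical estimate of H(Z_i | Z_{i-1}, ..., Z_{i-k}) in bits,
    computed from counts of k-bit contexts and their successor bits."""
    ctx, joint = Counter(), Counter()
    for i in range(k, len(z)):
        c = tuple(z[i - k:i])
        ctx[c] += 1
        joint[(c, z[i])] += 1
    n = sum(ctx.values())
    h = 0.0
    for (c, s), m in joint.items():
        h -= (m / n) * log2(m / ctx[c])
    return h

# Longer pasts give successively tighter (lower) entropy estimates.
noise = simulate_ge_noise(200_000, g=0.1, b=0.1, p_good=0.01, p_bad=0.3)
for k in range(6):
    print(k, round(finite_past_entropy(noise, k), 4))
```

With persistent states (here both states hold with probability 0.9 per step), even a short past identifies the channel state well, so the estimates settle quickly as k grows, which is the qualitative behavior the paper quantifies.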