DocumentCode :
3861384
Title :
State learning and mixing in entropy of hidden Markov processes and the Gilbert-Elliott channel
Author :
B.M. Hochwald;P.R. Jelenkovic
Author_Institution :
Lucent Technol., AT&T Bell Labs., Murray Hill, NJ, USA
Volume :
45
Issue :
1
fYear :
1999
Firstpage :
128
Lastpage :
138
Abstract :
Hidden Markov processes such as the Gilbert-Elliott (1960) channel have an infinite dependency structure. Therefore, entropy and channel capacity calculations require knowledge of the infinite past. In practice, such calculations are often approximated with a finite past. It is commonly assumed that the approximations require an unbounded amount of the past as the memory in the underlying Markov chain increases. We show that this is not necessarily true. We derive an exponentially decreasing upper bound on the accuracy of the finite-past approximation that is much tighter than existing upper bounds when the Markov chain mixes well. We also derive an exponentially decreasing upper bound that applies when the Markov chain does not mix at all. Our methods are demonstrated on the Gilbert-Elliott channel, where we prove that a prescribed finite-past accuracy is quickly reached, independently of the Markovian memory. We conclude that the past can be used either to learn the channel state when the memory is high, or to wait until the states mix when the memory is low. Implications for computing and achieving capacity on the Gilbert-Elliott channel are discussed.
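To illustrate the finite-past approximation discussed in the abstract, the following minimal Python sketch simulates a Gilbert-Elliott error process and empirically estimates the conditional entropy H(Z_n | Z_{n-1}, ..., Z_{n-k}) for increasing window lengths k. The channel parameters (state-transition probabilities g and b, per-state crossover probabilities) are illustrative assumptions, not values from the paper, and the plug-in entropy estimate is a generic empirical method rather than the authors' bounds.

```python
# Hypothetical sketch: empirical finite-past entropy estimates for a
# Gilbert-Elliott error process. All parameters below are illustrative.
import random
from collections import Counter
from math import log2

def simulate_ge_errors(n, g=0.05, b=0.05, p_good=0.01, p_bad=0.3, seed=0):
    """Simulate n error bits from a two-state Gilbert-Elliott channel.
    g: P(good -> bad), b: P(bad -> good); p_*: crossover prob per state."""
    rng = random.Random(seed)
    state = 0  # 0 = good, 1 = bad
    errors = []
    for _ in range(n):
        p = p_bad if state else p_good
        errors.append(1 if rng.random() < p else 0)
        flip = g if state == 0 else b
        if rng.random() < flip:
            state ^= 1
    return errors

def finite_past_entropy(errors, k):
    """Plug-in estimate of H(Z_n | Z_{n-1},...,Z_{n-k}) from (k+1)-block counts."""
    joint = Counter(tuple(errors[i:i + k + 1]) for i in range(len(errors) - k))
    ctx = Counter(tuple(errors[i:i + k]) for i in range(len(errors) - k))
    total = sum(joint.values())
    h = 0.0
    for block, c in joint.items():
        p_joint = c / total          # P(context, next symbol)
        p_cond = c / ctx[block[:k]]  # P(next symbol | context)
        h -= p_joint * log2(p_cond)
    return h

if __name__ == "__main__":
    z = simulate_ge_errors(200_000)
    for k in range(9):
        print(f"k={k}: H(Z | past of length {k}) ~ {finite_past_entropy(z, k):.4f} bits")
```

Under these assumed parameters, the printed estimates decrease monotonically in k and flatten out quickly, which is the qualitative behavior the abstract describes: a prescribed finite-past accuracy is reached with a modest window.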
Keywords :
"Entropy","Hidden Markov models","Upper bound","Markov processes","Fading","Associate members","Channel capacity","Speech processing","Image recognition","Speech recognition"
Journal_Title :
IEEE Transactions on Information Theory
Publisher :
ieee
ISSN :
0018-9448
Type :
jour
DOI :
10.1109/18.746777
Filename :
746777