Title :
A memory-efficient adaptive Huffman coding algorithm for very large sets of symbols
Author :
Pigeon, Steven ; Bengio, Yoshua
Author_Institution :
Dept. d'Informatique et de Recherche Opérationnelle, Université de Montréal, Que., Canada
Date :
30 Mar-1 Apr 1998
Abstract :
Summary form only given. The problem of computing minimum redundancy codes as symbols are observed one by one has received a lot of attention. However, existing algorithms implicitly assume either that the alphabet is small or that an arbitrary amount of memory is available for building the coding tree. In real-life applications one may need to encode symbols from a much larger alphabet, e.g. when coding integers. We introduce a new algorithm for adaptive Huffman coding, called algorithm M, that uses space proportional to the number of frequency classes. The algorithm uses a tree whose leaves represent sets of symbols with the same frequency, rather than individual symbols. The code for each symbol is therefore composed of a prefix (specifying the set, i.e. the leaf of the tree) and a suffix (specifying the symbol within the set of same-frequency symbols). The algorithm uses only two operations to remain as close as possible to the optimum: set migration and rebalancing. We analyze the computational complexity of algorithm M and point to its advantages in terms of low memory complexity and fast decoding. Comparative experiments were performed with algorithm M on the Calgary corpus, against static Huffman coding as well as another adaptive Huffman coding algorithm, algorithm Λ of Vitter. The experiments show that M performs comparably to or better than the other algorithms while requiring much less memory. Finally, we present an improved algorithm, M+, for non-stationary data, which models the distribution of the data in a fixed-size window over the data sequence.
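The prefix/suffix split described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm M: for simplicity the class prefix below is a unary code over classes ordered by decreasing frequency rather than a path in a rebalanced Huffman-style tree, and rebalancing is omitted. It only shows the core idea that same-frequency symbols share one leaf, and that incrementing a symbol's count is a "set migration" from one class to the next.

```python
import math

class FrequencyClassCoder:
    """Toy sketch of the frequency-class idea: symbols with the same
    count share one class; a symbol's code is a class prefix plus an
    index suffix within the class. The unary prefix here stands in
    for the tree path used by the actual algorithm M."""

    def __init__(self, alphabet):
        # All symbols start in the single class of frequency 0.
        self.classes = {0: list(alphabet)}  # frequency -> ordered symbols

    def _ordered_freqs(self):
        # Higher-frequency classes first, so they get shorter prefixes.
        return sorted(self.classes, reverse=True)

    def encode(self, symbol):
        for rank, f in enumerate(self._ordered_freqs()):
            members = self.classes[f]
            if symbol in members:
                prefix = "1" * rank + "0"  # unary class prefix (illustrative)
                # Fixed-width binary index of the symbol within its class.
                width = max(1, math.ceil(math.log2(len(members))))
                suffix = format(members.index(symbol), f"0{width}b")
                code = prefix + suffix
                self._migrate(symbol, f)   # set migration on each occurrence
                return code
        raise KeyError(symbol)

    def _migrate(self, symbol, f):
        # Move the symbol from class f to class f+1, creating the target
        # class and deleting the source class if it becomes empty. Memory
        # is proportional to the number of distinct frequencies, not to
        # the alphabet size.
        self.classes[f].remove(symbol)
        if not self.classes[f]:
            del self.classes[f]
        self.classes.setdefault(f + 1, []).append(symbol)
```

As with any adaptive code, a decoder would have to mirror the same migrations to stay synchronized with the encoder's class structure.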
Keywords :
Huffman codes; adaptive codes; computational complexity; trees (mathematics); Calgary corpus; M+ algorithm; adaptive Huffman coding algorithm; algorithm M; alphabet; coding tree; data distribution; data sequence; experiments; fast decoding; fixed-size window; frequency classes; improved algorithm; leaves; low memory complexity; memory-efficient algorithm; minimum redundancy codes; non-stationary data; prefix; rebalancing; set migration; static Huffman coding; suffix; symbols; Binary codes; Books; Compression algorithms; Decoding; Electrical capacitance tomography; Huffman coding
Conference_Titel :
Data Compression Conference, 1998. DCC '98. Proceedings
Conference_Location :
Snowbird, UT
Print_ISBN :
0-8186-8406-2
DOI :
10.1109/DCC.1998.672310