DocumentCode :
3146378
Title :
Analysis of arithmetic coding for data compression
Author :
Howard, Paul G. ; Vitter, Jeffrey Scott
Author_Institution :
Dept. of Comput. Sci., Brown Univ., Providence, RI, USA
fYear :
1991
fDate :
8-11 Apr 1991
Firstpage :
3
Lastpage :
12
Abstract :
The authors analyze the amount of compression possible when arithmetic coding is used for text compression in conjunction with various input models. Arithmetic coding, a technique for statistical lossless encoding, can be thought of as a generalization of Huffman coding in which probabilities are not constrained to be integral powers of 2 and code lengths need not be integers. Adaptive codes are proven to be as good as decrementing semi-adaptive codes. The tradeoff between scaling overheads and savings from exploitation of locality of reference is characterized exactly by means of weighted entropy.
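The sketch below is not from the paper; it is a minimal illustration of the interval-narrowing idea the abstract refers to, using an assumed three-symbol model and hypothetical names (model, encode_interval). It shows that the ideal arithmetic-coding cost is the fractional quantity -log2(product of symbol probabilities), whereas integer-length codes such as Huffman must round each symbol's length up to a whole number of bits.

```python
# Minimal sketch (not the authors' implementation): interval narrowing in
# arithmetic coding with a fixed, hypothetical symbol model.
import math

model = {"a": 0.5, "b": 0.3, "c": 0.2}  # assumed probabilities; powers of 2 are not required

def encode_interval(text, probs):
    """Narrow [low, high) once per symbol; the final width equals the product
    of the symbol probabilities, so the ideal code length is -log2(width)."""
    low, high = 0.0, 1.0
    for sym in text:
        width = high - low
        cum = 0.0
        for s, p in probs.items():
            if s == sym:
                high = low + width * (cum + p)
                low = low + width * cum
                break
            cum += p
    return low, high

low, high = encode_interval("abac", model)
ideal_bits = -math.log2(high - low)  # fractional cost, unlike integer code lengths
# Crude per-symbol integer-length (Shannon-style) bound, not an exact Huffman code:
integer_bits = sum(math.ceil(-math.log2(model[s])) for s in "abac")
print(f"arithmetic ~= {ideal_bits:.2f} bits, integer-length bound = {integer_bits} bits")
```

For the string "abac" this prints roughly 6.06 bits for arithmetic coding versus 7 bits for the per-symbol integer-length bound, which is the gap the Huffman-generalization remark in the abstract is describing.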
Keywords :
data compression; encoding; entropy; Huffman coding; adaptive codes; arithmetic coding; code lengths; decrementing semi-adaptive codes; locality of reference; scaling; statistical lossless encoding; text compression; tradeoff; weighted entropy
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Data Compression Conference, 1991 (DCC '91)
Conference_Location :
Snowbird, UT
Print_ISBN :
0-8186-9202-2
Type :
conf
DOI :
10.1109/DCC.1991.213368
Filename :
213368