• DocumentCode
    110202
  • Title
    Near-Optimal Rates for Limited-Delay Universal Lossy Source Coding
  • Author
    György, András; Neu, Gergely
  • Author_Institution
    Machine Learning Res. Group, Comput. & Autom. Res. Inst., Budapest, Hungary
  • Volume
    60
  • Issue
    5
  • fYear
    2014
  • fDate
    May 2014
  • Firstpage
    2823
  • Lastpage
    2834
  • Abstract
    We consider the problem of limited-delay lossy coding of individual sequences. Here, the goal is to design (fixed-rate) compression schemes to minimize the normalized expected distortion redundancy relative to a reference class of coding schemes, measured as the difference between the average distortion of the algorithm and that of the best coding scheme in the reference class. In compressing a sequence of length T, the best schemes available in the literature achieve an O(T^(-1/3)) normalized distortion redundancy relative to finite reference classes of limited delay and limited memory, and the same redundancy is achievable, up to logarithmic factors, when the reference class is the set of scalar quantizers. It has also been shown that the distortion redundancy is at least of order 1/√T in the latter case, and the lower bound can easily be extended to sufficiently powerful (possibly finite) reference coding schemes. In this paper, we narrow the gap between the upper and lower bounds, and give a compression scheme whose normalized distortion redundancy is O(√(ln(T)/T)) relative to any finite class of reference schemes, only a logarithmic factor larger than the lower bound. The method is based on the recently introduced shrinking dartboard prediction algorithm, a variant of exponentially weighted average prediction. The algorithm is also extended to the problem of joint source-channel coding over a (known) stochastic noisy channel and to the case when side information is also available to the decoder (the Wyner-Ziv setting). The same improvements are obtained for these settings as in the case of a noiseless channel. Our method is also applied to the problem of zero-delay scalar quantization, where O(ln(T)/√T) normalized distortion redundancy is achieved relative to the (infinite) class of scalar quantizers of a given rate, almost achieving the known lower bound of order 1/√T. The computationally efficient algorithms known for scalar quantization and the Wyner-Ziv setting carry over to our (improved) coding schemes presented in this paper.
  • Keywords
    prediction theory; quantisation (signal); sequences; source coding; Wyner-Ziv decoding; fixed-rate compression; joint source-channel coding; limited delay universal lossy source coding; limited memory; near optimal rate source coding; normalized distortion redundancy; normalized expected distortion redundancy; reference class; sequence compression; shrinking dartboard prediction algorithm; stochastic noisy channel; weighted average prediction; zero-delay scalar quantization; Decoding; Delays; Encoding; Prediction algorithms; Prediction methods; Receivers; Redundancy; Distortion redundancy; individual sequences; joint source-channel coding; lossy source coding; sequential coding; switching cost; universal coding;
  • fLanguage
    English
  • Journal_Title
    Information Theory, IEEE Transactions on
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.2014.2307062
  • Filename
    6746197