  • DocumentCode
    2839799
  • Title
    Scaling Down Off-the-Shelf Data Compression: Backwards-Compatible Fine-Grain Mixing
  • Author
    Gray, Michael; Peterson, Peter; Reiher, Peter
  • Author_Institution
    Comput. Sci. Dept., Univ. of California Los Angeles, Los Angeles, CA, USA
  • fYear
    2012
  • fDate
    18-21 June 2012
  • Firstpage
    112
  • Lastpage
    121
  • Abstract
    Pu and Singaravelu presented Fine-Grain Mixing (FG-Mixing), an adaptive compression system that aims to maximize CPU and network utilization simultaneously by splitting a network stream into a mixture of compressed and uncompressed blocks. Blocks were compressed opportunistically in a send buffer: as many blocks were compressed as possible without the compressor becoming a bottleneck. Their system utilized all available CPU and network bandwidth even on high-speed connections and achieved much greater throughput than previous adaptive compression systems. Here, we take a different view of FG-Mixing than Pu and Singaravelu did and offer another explanation for its high performance: fine-grain mixing of compressed and uncompressed blocks enables off-the-shelf compressors to scale down their degree of compression linearly with decreasing CPU usage. Exploring this scaling behavior in depth allows us to make a variety of improvements to fine-grain mixed compression: better compression ratios for a given level of CPU consumption, a wider range of data-reduction and CPU-cost options, and parallelized compression that takes advantage of multi-core CPUs. We make full compatibility with the ubiquitous deflate decompressor (used directly in many network protocols, and as the back-end of the gzip and Zip formats) a primary goal, rather than using a special, incompatible protocol as in the original implementation of FG-Mixing. Moreover, we show that the benefits of fine-grain mixing are retained by our compatible version.
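    The sketch below (Python; not code from the paper) illustrates the mixing idea the abstract describes, under assumptions of our own: per-block framing as concatenated gzip members, a simple CPU-budget heuristic, and the illustrative name fine_grain_mix. Blocks are deflate-compressed while CPU time stays under the budget and emitted as stored (level-0) members otherwise, so the concatenated output remains readable by a stock gzip/deflate decompressor.

        import gzip
        import time

        def fine_grain_mix(blocks, cpu_budget=0.5, level=6):
            # Sketch only: emit each block as its own gzip member.  Blocks are
            # deflate-compressed while time spent compressing stays within
            # cpu_budget (a fraction of elapsed wall-clock time); otherwise
            # they are emitted as stored (level-0) members, which cost almost
            # no CPU.  Standard gzip tools decode concatenated members, so an
            # unmodified receiver can read the result.
            out = bytearray()
            busy = 0.0
            start = time.monotonic()
            for block in blocks:
                elapsed = time.monotonic() - start
                if busy <= cpu_budget * elapsed:
                    t0 = time.monotonic()
                    out += gzip.compress(block, compresslevel=level)
                    busy += time.monotonic() - t0
                else:
                    # Stored member: data passes through with ~zero CPU cost.
                    out += gzip.compress(block, compresslevel=0)
            return bytes(out)

        # The receiver needs no special protocol: a plain multi-member gzip decode.
        blocks = [b"example payload " * 256 for _ in range(64)]
        mixed = fine_grain_mix(blocks, cpu_budget=0.3)
        assert gzip.decompress(mixed) == b"".join(blocks)

    The per-member framing used here trades a few bytes of header overhead per block for compatibility with unmodified decompressors; other deflate-compatible framings (for example, stored deflate blocks within a single stream) are also possible.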
  • Keywords
    data compression; data reduction; CPU consumption; CPU cost option; CPU utilization maximization; adaptive compression system; backwards-compatible fine-grain mixing; fine-grain mixed compression; multicore CPU; network bandwidth; network utilization maximization; off-the-shelf compressor; off-the-shelf data compression; parallelized compression; scaling behavior; Bandwidth; Compressors; History; Image coding; Standards; Switches; Throughput
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Distributed Computing Systems (ICDCS), 2012 IEEE 32nd International Conference on
  • Conference_Location
    Macau
  • ISSN
    1063-6927
  • Print_ISBN
    978-1-4577-0295-2
  • Type
    conf
  • DOI
    10.1109/ICDCS.2012.21
  • Filename
    6257984