  • DocumentCode
    979975
  • Title
    High-Bandwidth Network Memory System Through Virtual Pipelines
  • Author
    Agrawal, Banit; Sherwood, Timothy
  • Author_Institution
    Comput. Sci. Dept., Univ. of California, Santa Barbara, Santa Barbara, CA, USA
  • Volume
    17
  • Issue
    4
  • fYear
    2009
  • Firstpage
    1029
  • Lastpage
    1041
  • Abstract
As network bandwidth increases, designing an effective memory system for network processors becomes a significant challenge. The size of the routing tables, the complexity of the packet classification rules, and the amount of packet buffering required all continue to grow at a staggering rate. Simply relying on large, fast SRAMs alone is not likely to be scalable or cost-effective. Instead, trends point to the use of low-cost commodity DRAM devices as a means to deliver the worst-case memory performance that network data-plane algorithms demand. While DRAMs can deliver a great deal of throughput, the problem is that memory banking significantly complicates the worst-case analysis, and specialized algorithms are needed to ensure that specific types of access patterns are conflict-free. We introduce virtually pipelined memory, an architectural technique that efficiently supports high bandwidth, uniform latency memory accesses, and high-confidence throughput even under adversarial conditions. Virtual pipelining provides a simple-to-analyze programming model of a deep pipeline (deterministic latencies) with a completely different physical implementation (a memory system with banks and probabilistic mapping). This allows designers to effectively decouple the analysis of their algorithms and data structures from the analysis of the memory buses and banks. Unlike specialized hardware customized for a specific data-plane algorithm, our system makes no assumption about the memory access patterns. We present a mathematical argument for our system's ability to provably provide bandwidth with high confidence and demonstrate its functionality and area overhead through a synthesizable design. We further show that, even though our scheme is general purpose to support new applications such as packet reassembly, it outperforms the state-of-the-art in specialized packet buffering architectures.
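    The "probabilistic mapping" of addresses to banks that the abstract describes can be illustrated with multiply-shift universal hashing (one of the keyword terms below). This is a minimal sketch for intuition only, not the paper's actual design; the bank count, address width, and all names here are assumptions:

    ```python
    # Sketch: randomized address-to-bank mapping via a multiply-shift
    # universal hash. Illustrates the idea only; NUM_BANKS, WORD_BITS,
    # and the helper names are assumptions, not the paper's design.
    import random

    NUM_BANKS = 8                            # assumed power-of-two bank count
    WORD_BITS = 32                           # assumed address width
    BANK_BITS = NUM_BANKS.bit_length() - 1   # log2(NUM_BANKS) = 3

    def make_bank_hash(multiplier=None, seed=None):
        """h(addr) = top BANK_BITS bits of (a * addr) mod 2**WORD_BITS,
        with 'a' a random odd multiplier (multiply-shift hashing)."""
        if multiplier is None:
            multiplier = random.Random(seed).randrange(1 << WORD_BITS) | 1
        mask = (1 << WORD_BITS) - 1
        shift = WORD_BITS - BANK_BITS

        def bank_of(addr):
            return ((multiplier * addr) & mask) >> shift

        return bank_of

    # A stride-NUM_BANKS access pattern hits one bank under plain modulo
    # mapping, but scatters across banks under the randomized hash, so no
    # fixed (even adversarial) pattern reliably forces bank conflicts.
    bank_of = make_bank_hash(seed=1)
    pattern = [i * NUM_BANKS for i in range(64)]
    print(sorted({bank_of(a) for a in pattern}))
    ```

    Because the multiplier is chosen at random per device (or per reboot), an adversary cannot construct a packet stream that deterministically serializes on one bank, which is what lets the paper argue for bandwidth with high confidence rather than in the worst case.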
  • Keywords
    DRAM chips; microprocessor chips; DRAM; high-bandwidth network memory system; network processors; packet buffering; packet reassembly; universal hashing; virtual pipelines; bank conflicts; MTS (mean time to stall); VPNM; memory; memory controller; network
  • fLanguage
    English
  • Journal_Title
    IEEE/ACM Transactions on Networking
  • Publisher
    IEEE
  • ISSN
    1063-6692
  • Type
    jour
  • DOI
    10.1109/TNET.2008.2008646
  • Filename
    5031902