• DocumentCode
    3239702
  • Title
    Performance Optimizations and Bounds for Sparse Matrix-Vector Multiply

  • Author
    Vuduc, Richard; Demmel, James W.; Yelick, Katherine A.; Kamil, Shoaib; Nishtala, Rajesh; Lee, Benjamin

  • Author_Institution
    University of California at Berkeley
  • fYear
    2002
  • fDate
    16-22 Nov. 2002
  • Firstpage
    26
  • Lastpage
    26
  • Abstract
    We consider performance tuning, by code and data structure reorganization, of sparse matrix-vector multiply (SpM×V), one of the most important computational kernels in scientific applications. This paper addresses the fundamental questions of what limits exist on such performance tuning, and how closely tuned code approaches these limits. Specifically, we develop upper and lower bounds on the performance (Mflop/s) of SpM×V when tuned using our previously proposed register blocking optimization. These bounds are based on the non-zero pattern in the matrix and the cost of basic memory operations, such as cache hits and misses. We evaluate our tuned implementations with respect to these bounds using hardware counter data on 4 different platforms and on a test set of 44 sparse matrices. We find that we can often get within 20% of the upper bound, particularly on a class of matrices from finite element modeling (FEM) problems; on non-FEM matrices, performance improvements of 2× are still possible. Lastly, we present a new heuristic that selects optimal or near-optimal register block sizes (the key tuning parameters) more accurately than our previous heuristic. Using the new heuristic, we show improvements in SpM×V performance (Mflop/s) by as much as 2.5× over an untuned implementation. Collectively, our results suggest that future performance improvements, beyond those that we have already demonstrated for SpM×V, will come from two sources: (1) consideration of higher-level matrix structures (e.g., exploiting symmetry, matrix reordering, multiple register block sizes), and (2) optimizing kernels with more opportunity for data reuse (e.g., sparse matrix-multiple vector multiply, multiplication of AᵀA by a vector).
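    To make the register blocking idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code) of an SpM×V kernel for a matrix stored in 2×2 block compressed sparse row (BCSR) format: each stored block contributes r×c = 4 flops per column-index lookup, and the small accumulators and reused x entries can be kept in registers. All names here are illustrative.

    ```c
    #include <stdio.h>

    /* y += A*x for a matrix of 2x2 register blocks.
       brow_ptr : start of each block row in bcol_idx/vals
       bcol_idx : block column index of each stored block
       vals     : 2x2 block entries, row-major, 4 doubles per block */
    static void bcsr_spmv_2x2(int n_brows,
                              const int *brow_ptr, const int *bcol_idx,
                              const double *vals,
                              const double *x, double *y)
    {
        for (int bi = 0; bi < n_brows; ++bi) {
            double y0 = 0.0, y1 = 0.0;           /* accumulators held in registers */
            for (int k = brow_ptr[bi]; k < brow_ptr[bi + 1]; ++k) {
                const double *a = &vals[4 * k];
                int j = 2 * bcol_idx[k];
                double x0 = x[j], x1 = x[j + 1]; /* x entries reused by both rows */
                y0 += a[0] * x0 + a[1] * x1;
                y1 += a[2] * x0 + a[3] * x1;
            }
            y[2 * bi]     += y0;
            y[2 * bi + 1] += y1;
        }
    }

    int main(void)
    {
        /* 4x4 matrix with two nonzero 2x2 blocks:
           block (0,0) = [[1,2],[3,4]] and block (1,1) = [[5,6],[7,8]]. */
        int brow_ptr[] = {0, 1, 2};
        int bcol_idx[] = {0, 1};
        double vals[]  = {1, 2, 3, 4,  5, 6, 7, 8};
        double x[] = {1, 1, 1, 1};
        double y[] = {0, 0, 0, 0};

        bcsr_spmv_2x2(2, brow_ptr, bcol_idx, vals, x, y);
        printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);  /* prints: 3 7 11 15 */
        return 0;
    }
    ```

    The choice of block size (here 2×2) is exactly the tuning parameter the paper's heuristic selects; a poor choice pads blocks with explicit zeros and wastes both bandwidth and flops.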
  • Keywords
    Costs; Counting circuits; Data structures; Hardware; Kernel; Optimization; Registers; Sparse matrices; Testing; Upper bound;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Supercomputing, ACM/IEEE 2002 Conference
  • ISSN
    1063-9535
  • Print_ISBN
    0-7695-1524-X
  • Type
    conf

  • DOI
    10.1109/SC.2002.10025
  • Filename
    1592862