Author_Institution :
Dept. of Comput. Sci., Univ. of Wyoming, Laramie, WY, USA
Abstract :
This paper presents a performance modeling and optimization analysis tool that predicts and optimizes the performance of sparse matrix-vector multiplication (SpMV) on GPUs. We make two contributions. 1) We present an integrated analytical and profile-based performance model that accurately predicts the kernel execution times of the CSR, ELL, COO, and HYB SpMV kernels. The approach is general: it is neither tied to a particular GPU programming language nor restricted to a specific GPU architecture. In this paper, we use CUDA-based SpMV kernels and an NVIDIA Tesla C2050 for our modeling and experiments. In our experiments, for 77 of the 82 test cases, the difference between the predicted and measured execution times is less than 9 percent; for the remaining five test cases, the difference is between 9 and 10 percent. For the CSR, ELL, COO, and HYB SpMV CUDA kernels, the average differences are 6.3, 4.4, 2.2, and 4.7 percent, respectively. 2) Building on the performance model, we design a dynamic-programming-based auto-selection algorithm that automatically reports an optimal SpMV solution (i.e., the optimal storage strategy, storage format(s), and predicted execution time) for a target sparse matrix. In our experiments, the optimal solutions improve average performance by 41.1, 49.8, and 37.9 percent over NVIDIA's CSR, COO, and HYB CUDA kernels, respectively.
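For context, the CSR kernel named above is the standard one-thread-per-row (scalar) CUDA SpMV kernel; a minimal sketch is given below. The identifier names are illustrative choices of ours, not necessarily those of the kernels profiled in the paper.

    // Scalar CSR SpMV: y = A*x, one thread per row.
    // row_ptr has num_rows+1 entries; col_idx/vals hold the nonzeros.
    __global__ void spmv_csr_scalar(int num_rows,
                                    const int   *row_ptr,
                                    const int   *col_idx,
                                    const float *vals,
                                    const float *x,
                                    float       *y)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < num_rows) {
            float dot = 0.0f;
            // Accumulate the dot product of row `row` with x.
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                dot += vals[j] * x[col_idx[j]];
            y[row] = dot;
        }
    }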
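The abstract does not give the recurrence of the auto-selection algorithm, so the following host-side sketch shows only one plausible strip-wise dynamic-programming formulation under stated assumptions: the matrix is split into fixed-height row strips, and a hypothetical predict_time(lo, hi, fmt), standing in for the performance model, supplies the predicted kernel time for strips [lo, hi) stored in format fmt. The DP then picks the cheapest partition and per-strip format assignment.

    #include <algorithm>
    #include <cstdio>
    #include <limits>
    #include <vector>

    enum Format { CSR, ELL, COO, HYB, NUM_FORMATS };

    // Hypothetical stand-in for the paper's analytical/profile-based model;
    // toy per-strip costs are used here only so the example runs.
    double predict_time(int lo, int hi, Format fmt) {
        static const double per_strip[NUM_FORMATS] = {1.0, 0.8, 1.3, 0.9};
        return (hi - lo) * per_strip[fmt];
    }

    // best[i] = minimal predicted time to cover the first i row strips,
    // considering every split point j and every format for strips [j, i).
    double optimal_partition(int n_strips) {
        std::vector<double> best(n_strips + 1,
                                 std::numeric_limits<double>::infinity());
        best[0] = 0.0;
        for (int i = 1; i <= n_strips; ++i)
            for (int j = 0; j < i; ++j)
                for (int f = 0; f < NUM_FORMATS; ++f)
                    best[i] = std::min(best[i],
                                       best[j] + predict_time(j, i, Format(f)));
        return best[n_strips];
    }

    int main() {
        printf("predicted optimal time: %.2f\n", optimal_partition(8));
        return 0;
    }

A backtracking pass over the chosen split points would recover the storage strategy itself (which strips, in which formats); the actual formulation in the paper may differ.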
Keywords :
graphics processing units; parallel architectures; performance evaluation; sparse matrices; COO; CSR; CUDA-based SpMV kernels; ELL; GPU architectures; GPU programming languages; HYB SpMV kernels; NVIDIA Tesla C2050; kernel execution times; optimization analysis tool; performance modeling; sparse matrix-vector multiplication; Analytical models; Benchmark testing; Computational modeling; Graphics processing units; Kernel; Sparse matrices; Strips; CUDA; GPU; Performance modeling; sparse matrix-vector multiplication