Title :
On the Memory Wall and Performance of Symmetric Sparse Matrix Vector Multiplications in Different Data Structures on Shared Memory Machines
Author :
Tongxiang Gu;Xingping Liu;Zeyao Mo;Xiaowen Xu;Shengxin Zhu
Author_Institution :
Lab. of Comput. Phys., Inst. of Appl. Phys. &
Abstract :
Sparse matrix vector multiplications (SpMVs) are typical sparse operations with a high ratio of memory references to computations. According to the roofline model, the performance of such operations is limited by memory bandwidth on shared memory machines. Careful design of the data structure can improve the performance of such memory-intensive sparse operations. By comparing the performance of symmetric SpMVs in three different data structures, the paper shows that a packed compressed data structure for symmetric sparse matrices significantly improves the performance of symmetric sparse matrix vector multiplication on shared memory machines. A simple linear model is proposed to show that the floating-point operation time can be overlapped with the memory reference time and is thus negligible for such memory-intensive sparse operations. Various numerical results are presented, compared, analyzed, and validated to confirm the proposed model, and the STREAM benchmark is also used to verify the results.
Keywords :
"Sparse matrices","Bandwidth","Memory management","Data structures","Algorithm design and analysis","Mathematical model","Numerical models"
Conference_Titel :
2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom)
DOI :
10.1109/UIC-ATC-ScalCom-CBDCom-IoP.2015.259