DocumentCode
692864
Title
Globalizing selectively: Shared-memory efficiency with address-space separation
Author
Mahajan, Nilesh ; Pitambare, Uday ; Chauhan, Arun
Author_Institution
Indiana Univ., Bloomington, IN, USA
fYear
2013
fDate
17-22 Nov. 2013
Firstpage
1
Lastpage
12
Abstract
It has become common for MPI-based applications to run on shared-memory machines. However, MPI semantics do not allow the MPI library to fully exploit shared memory for communication between processes. This paper presents an approach that combines compiler transformations with a specialized runtime system to achieve zero-copy communication whenever possible: the compiler proves certain properties statically and globalizes data selectively by altering how communication buffers are allocated and deallocated. When such proofs are not possible statically, the runtime system optimizes dynamically, copying data only when write-write or read-write conflicts arise. We implemented a prototype compiler using ROSE and evaluated it on several benchmarks. Our system produces code that performs better than MPI in most cases, and no worse than MPI tuned for shared memory in all cases.
Keywords
application program interfaces; globalisation; message passing; program compilers; shared memory systems; MPI library; MPI semantics; MPI-based applications; ROSE; address-space separation; communication buffer deallocation; compiler transformations; selective data globalization; prototype compiler; read-write conflicts; shared-memory efficiency; shared-memory machines; write-write conflicts; zero-copy communication; Cloning; Libraries; Programming; Receivers; Resource management; Runtime; Semantics
fLanguage
English
Publisher
ieee
Conference_Title
2013 International Conference for High Performance Computing, Networking, Storage and Analysis (SC)
Conference_Location
Denver, CO
Print_ISBN
978-1-4503-2378-9
Type
conf
DOI
10.1145/2503210.2503275
Filename
6877450