DocumentCode
426876
Title
OMPI: Optimizing MPI Programs using Partial Evaluation
Author
Ogawa, Hirotaka ; Matsuoka, Satoshi
Author_Institution
The University of Tokyo, Japan
fYear
1996
fDate
1996
Firstpage
37
Lastpage
37
Abstract
MPI is gaining acceptance as a standard for message-passing in high-performance computing, due to its powerful and flexible support of various communication styles. However, the complexity of its API imposes significant software overhead, and as a result, the applicability of MPI has been restricted to rather regular, coarse-grained computations. Our OMPI (Optimizing MPI) system removes much of the excess overhead by employing partial evaluation techniques, which exploit static information in MPI calls. Because partial evaluation alone is insufficient, we also utilize template functions for further optimization. To validate the effectiveness of our OMPI system, we performed baseline as well as more extensive benchmarks on a set of application cores with different communication characteristics, on the 64-node Fujitsu AP1000 MPP. Benchmarks show that OMPI improves execution efficiency by as much as a factor of two for communication-intensive application cores with minimal code increase. It also performs significantly better than a previous dynamic optimization technique.
Keywords
MPI; SUIF; communication optimization; message passing; parallel computing; partial evaluation; Application software; Communication standards; Message passing; Parallel languages; Parallel processing; Power engineering and energy; Power engineering computing; Runtime library; Software libraries; Writing
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 1996 ACM/IEEE Conference on Supercomputing
Print_ISBN
0-89791-854-1
Type
conf
DOI
10.1109/SUPERC.1996.183539
Filename
1392908
Link To Document