• DocumentCode
    1783224
  • Title
    Accelerating MPI Collective Communications through Hierarchical Algorithms Without Sacrificing Inter-Node Communication Flexibility
  • Author
    Parsons, Benjamin S.; Pai, Vijay S.
  • Author_Institution
    Purdue Univ., West Lafayette, IN, USA
  • fYear
    2014
  • fDate
    19-23 May 2014
  • Firstpage
    208
  • Lastpage
    218
  • Abstract
    This paper presents and evaluates a universal algorithm for improving the performance of MPI collective communication operations on hierarchical clusters with many-core nodes. The algorithm exploits shared-memory buffers for efficient intra-node communication while still allowing the use of unmodified, hierarchy-unaware traditional collectives for inter-node communication (including collectives such as Alltoallv). It improves on past work that converts a specific collective algorithm into a hierarchical version and is generally restricted to fan-in, fan-out, and Allgather algorithms. Experimental results show substantial performance improvements using a variety of inter-node collectives from MPICH as well as the closed-source Cray MPT. The evaluation tests the new algorithms on as many as 65,536 cores and achieves speedups over the baseline averaging 14.2x for Alltoallv, 26x for Allgather, and 32.7x for Reduce-Scatter. The paper further improves inter-node communication by using multiple senders from the same shared-memory buffer, achieving additional speedups averaging 2.5x. The discussion also evaluates special-purpose extensions that improve intra-node communication by returning shared-memory or copy-on-write-protected buffers from the collective.
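    The hierarchical decomposition the abstract describes can be sketched without MPI: ranks on a node first gather into a per-node shared buffer, one leader per node then participates in the unmodified inter-node collective, and every rank reads the result back through shared memory. The following is a minimal, self-contained Python simulation of that structure for Allgather; the node layout, function names, and flat baseline are illustrative assumptions, not the paper's implementation.

    ```python
    # Hypothetical simulation of the hierarchical Allgather decomposition:
    # intra-node gather -> inter-node collective among leaders -> intra-node
    # distribution. Node/rank layout here is illustrative only.

    def hierarchical_allgather(data_by_rank, ranks_per_node):
        """data_by_rank: list indexed by global rank, holding each rank's item."""
        nranks = len(data_by_rank)
        # Step 1: intra-node gather into a per-node shared-memory buffer.
        node_buffers = [
            data_by_rank[n:n + ranks_per_node]
            for n in range(0, nranks, ranks_per_node)
        ]
        # Step 2: inter-node Allgather among node leaders only, using an
        # unmodified, hierarchy-unaware collective (modeled as concatenation
        # of the leaders' node buffers).
        gathered = [item for buf in node_buffers for item in buf]
        # Step 3: every rank reads the full result from its node's shared buffer.
        return [list(gathered) for _ in range(nranks)]

    def flat_allgather(data_by_rank):
        """Baseline: every rank exchanges directly with every other rank."""
        return [list(data_by_rank) for _ in range(len(data_by_rank))]
    ```

    The point of the comparison is that the hierarchical version produces the same per-rank result as the flat collective while replacing most inter-node traffic with intra-node shared-memory copies; only one rank per node needs to touch the network.
    
    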
  • Keywords
    application program interfaces; message passing; shared memory systems; Allgather algorithms; Alltoallv; MPI collective communications; MPICH; closed-source Cray MPT; copy-on-write protected buffers; fan-in algorithms; fan-out algorithms; hierarchical clusters; intra-node communication; many-core nodes; reduce-scatter; shared-memory buffers; special-purpose extensions; universal algorithm; Algorithm design and analysis; Clustering algorithms; Multicore processing; Optimization; Program processors; Vectors;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE 28th International Parallel and Distributed Processing Symposium
  • Conference_Location
    Phoenix, AZ
  • ISSN
    1530-2075
  • Print_ISBN
    978-1-4799-3799-8
  • Type
    conf
  • DOI
    10.1109/IPDPS.2014.32
  • Filename
    6877256