• DocumentCode
    1813836
  • Title
    Analysing the influence of InfiniBand choice on OpenMPI memory consumption
  • Author
    Perks, O.; Beckingsale, D.A.; Dawes, A.S.; Herdman, J.A.; Mazauric, Cyril; Jarvis, S.A.
  • Author_Institution
    Dept. of Comput. Sci., Univ. of Warwick, Coventry, UK
  • fYear
    2013
  • fDate
    1-5 July 2013
  • Firstpage
    186
  • Lastpage
    193
  • Abstract
    The ever-increasing scale of modern high performance computing platforms poses challenges for system architects and code developers alike. The increase in core count densities and the associated cost of components is having a dramatic effect on the viability of high memory-per-core ratios. Whilst the available memory per core is decreasing, the increased scale of parallel jobs is testing the efficiency of MPI implementations with respect to memory overhead. Scalability issues have always plagued both hardware manufacturers and software developers, and the combined effects can be disabling. In this paper we address the issue of MPI memory consumption with regard to InfiniBand network communications. We reaffirm some widely held beliefs regarding the existence of scalability problems under certain conditions. Additionally, we present results from testing memory-optimised runtime configurations and vendor-provided optimisation libraries. Using Orthrus, a linear solver benchmark developed by AWE, we demonstrate these memory-centric optimisations and their performance implications. We show the growth of OpenMPI memory consumption (demonstrating poor scalability) on both Mellanox and QLogic InfiniBand platforms. We demonstrate a 616× increase in MPI memory consumption for a 64× increase in core count, with a default OpenMPI configuration on Mellanox. Through the use of the Mellanox MXM and QLogic PSM optimisation libraries we observe 117× and 115× reductions in MPI memory at the application's memory high-water mark. This significantly improves the potential scalability of the code.
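    The memory-optimised runtime configurations mentioned in the abstract correspond to selecting different OpenMPI InfiniBand transports. Below is a minimal, illustrative sketch (not taken from the paper) of how such transports are typically chosen via OpenMPI MCA parameters; the mxm and psm MTLs are available only when OpenMPI has been built against the respective vendor libraries, and the binary name orthrus is purely hypothetical:

      # Default InfiniBand path: openib BTL over verbs (per-peer queue pairs)
      mpirun --mca btl openib,sm,self ./orthrus
      # Mellanox MXM library via the cm PML and mxm MTL
      mpirun --mca pml cm --mca mtl mxm ./orthrus
      # QLogic PSM library via the cm PML and psm MTL
      mpirun --mca pml cm --mca mtl psm ./orthrus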
  • Keywords
    message passing; optimisation; parallel processing; software libraries; storage management; AWE; InfiniBand network communications; Mellanox MXM; OpenMPI memory consumption; Orthrus; QLogic InfiniBand platforms; QLogic PSM optimisation libraries; core count densities; high memory-per-core ratios; high performance computing platforms; linear solver benchmark; memory overhead; memory per core; memory-centric optimisations; memory-optimised runtime configurations; parallel jobs; scalability issues; scalability problems; Benchmark testing; Hardware; Libraries; Memory management; Optimization; Runtime; Scalability; HWM; InfiniBand; MPI; Memory; Parallel; Tools;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    High Performance Computing and Simulation (HPCS), 2013 International Conference on
  • Conference_Location
    Helsinki
  • Print_ISBN
    978-1-4799-0836-3
  • Type
    conf
  • DOI
    10.1109/HPCSim.2013.6641412
  • Filename
    6641412