Title :
SR-IOV Support for Virtualization on InfiniBand Clusters: Early Experience
Author :
Jose, Jithin ; Mingzhe Li ; Xiaoyi Lu ; Kandalla, K.C. ; Arnold, M.D. ; Panda, Dhabaleswar K.
Author_Institution :
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA
Abstract :
High Performance Computing (HPC) systems are becoming increasingly complex and are also associated with very high operational costs. The cloud computing paradigm, coupled with modern Virtual Machine (VM) technology, offers attractive techniques to easily manage large-scale systems while significantly bringing down the costs of computation, memory, and storage. However, running HPC applications on cloud systems still remains a major challenge. One of the biggest hurdles in realizing this objective is the performance offered by virtualized computing environments, more specifically, virtualized I/O devices. Since HPC applications and communication middleware rely heavily on advanced features offered by modern high performance interconnects such as InfiniBand, the performance of virtualized InfiniBand interfaces is crucial. Emerging hardware-based solutions, such as Single Root I/O Virtualization (SR-IOV), offer an attractive alternative to existing software-based solutions. The benefits of SR-IOV have been widely studied for GigE and 10GigE networks. However, with InfiniBand networks being increasingly adopted in the cloud computing domain, it is critical to fully understand the performance benefits of SR-IOV in InfiniBand networks, especially the performance characteristics and trade-offs of HPC communication middleware (such as Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS)) and applications. To the best of our knowledge, this is the first paper that offers an in-depth analysis of SR-IOV with InfiniBand. Our experimental evaluations show that the performance of MPI and PGAS point-to-point communication benchmarks over SR-IOV with InfiniBand is comparable to that of native InfiniBand hardware for most message lengths. However, we observe that the performance of MPI collective operations over SR-IOV with InfiniBand is inferior to that of the native (non-virtualized) mode.
We also evaluate the trade-offs of various VM-to-CPU mapping policies on modern multi-core architectures and present our experiences.
Keywords :
application program interfaces; cloud computing; computer networks; input-output programs; message passing; middleware; multiprocessing systems; parallel processing; virtualisation; HPC system; InfiniBand cluster; InfiniBand network; MPI; PGAS; SR-IOV support; VM technology; VM-to-CPU mapping policy; cloud computing; communication middleware; computation cost; high performance computing; high performance interconnect; memory cost; message passing interface; multicore architecture; partitioned global address space; single root input-output virtualization; storage cost; virtual machine; virtualization; virtualized input-output device; Bandwidth; Cloud computing; Electronics packaging; Hardware; Performance evaluation; Virtualization; Clusters; HPC; InfiniBand; SR-IOV; Virtualization;
Conference_Titel :
Cluster, Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on
Conference_Location :
Delft
Print_ISBN :
978-1-4673-6465-2
DOI :
10.1109/CCGrid.2013.76