DocumentCode
2625713
Title
MultiGPU computing using MPI or OpenMP
Author
Noaje, Gabriel; Krajecki, Michaël; Jaillet, Christophe
Author_Institution
CReSTIC - SysCom, Univ. of Reims Champagne-Ardenne, Reims, France
fYear
2010
fDate
26-28 Aug. 2010
Firstpage
347
Lastpage
354
Abstract
GPU computing follows the trend of GPGPU, driven by innovations in both the hardware and the programming languages made available to nongraphic programmers. Since some problems require considerable time to solve, or involve data quantities that do not fit on a single GPU, the logical continuation was to make use of multiple GPUs. To use a multiGPU environment in a general way, our paper presents an approach where each card is driven by either a heavyweight MPI process or a lightweight OpenMP thread. We compare the two models in terms of performance, implementation complexity and particularities, as well as the overhead implied by the mixed code. We show that the best performance is obtained with OpenMP. We also note that using "pinned memory" further improves the execution time. The next objective is to create a three-level multiGPU environment combining internode communication (processes, distributed memory), intranode GPU management (threads, shared memory) and computation inside the GPU cards.
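The record itself contains no code; as an illustration of the OpenMP variant the abstract describes (one lightweight thread per card, with pinned host memory for faster transfers), a minimal CUDA sketch might look like the following. All names and buffer sizes here are hypothetical, not taken from the paper:

```cuda
#include <cstdio>
#include <omp.h>
#include <cuda_runtime.h>

int main() {
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);      // number of CUDA-capable cards in the node
    omp_set_num_threads(ngpu);      // one lightweight OpenMP thread per GPU

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);         // bind this thread to its own card

        // Pinned (page-locked) host memory accelerates host<->device copies,
        // matching the abstract's observation about "pinned memory".
        size_t bytes = 1 << 20;     // hypothetical buffer size
        float *h_buf = nullptr, *d_buf = nullptr;
        cudaMallocHost((void **)&h_buf, bytes);
        cudaMalloc((void **)&d_buf, bytes);

        cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
        // ... launch kernels on device `tid` here ...
        cudaDeviceSynchronize();

        cudaFree(d_buf);
        cudaFreeHost(h_buf);
    }
    return 0;
}
```

In the MPI variant the paper compares against, each rank would call cudaSetDevice with its node-local rank instead of an OpenMP thread id; the heavier process start-up and explicit message passing are the sources of the overhead the abstract mentions.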
Keywords
application program interfaces; computer graphic equipment; message passing; programming languages; GPU cards; MPI; OpenMP; distributed memory; hardware languages; multiGPU computing; nongraphic programmers; pinned memory; Graphics; Graphics processing unit; Instruction sets; Kernel; Message systems; Programming
fLanguage
English
Publisher
ieee
Conference_Titel
2010 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP)
Conference_Location
Cluj-Napoca
Print_ISBN
978-1-4244-8228-3
Electronic_ISBN
978-1-4244-8230-6
Type
conf
DOI
10.1109/ICCP.2010.5606414
Filename
5606414
Link To Document