DocumentCode :
2483703
Title :
A framework for efficient and scalable execution of domain-specific templates on GPUs
Author :
Sundaram, Narayanan ; Raghunathan, Anand ; Chakradhar, Srimat T.
Author_Institution :
NEC Labs. America, Princeton, NJ, USA
fYear :
2009
fDate :
23-29 May 2009
Firstpage :
1
Lastpage :
12
Abstract :
Graphics processing units (GPUs) have emerged as important players in the transition of the computing industry from sequential to multi- and many-core computing. We propose a software framework for execution of domain-specific parallel templates on GPUs, which simultaneously raises the abstraction level of GPU programming and ensures efficient execution with forward scalability to large data sizes and new GPU platforms. To achieve scalable and efficient GPU execution, our framework focuses on two critical problems that have been largely ignored in previous efforts: processing large data sets that do not fit within the GPU memory, and minimizing data transfers between the host and GPU. Our framework takes domain-specific parallel programming templates that are expressed as parallel operator graphs, and performs operator splitting, offload unit identification, and scheduling of offloaded computations and data transfers between the host and the GPU, to generate a highly optimized execution plan. Finally, a code generator produces a hybrid CPU/GPU program, in accordance with the derived execution plan, that uses lower-level frameworks such as CUDA. We have applied the proposed framework to templates from the recognition domain, specifically edge detection kernels and convolutional neural networks that are commonly used in image and video analysis. We present results on two different GPU platforms from NVIDIA (a Tesla C870 GPU computing card and a GeForce 8800 graphics card) that demonstrate 1.7-7.8X performance improvements over already accelerated baseline GPU implementations. We also demonstrate scalability to input data sets and application memory footprints of 6 GB and 17 GB, respectively, on GPU platforms with only 768 MB and 1.5 GB of memory.
Keywords :
coprocessors; edge detection; neural nets; parallel programming; program compilers; scheduling; CPU; CUDA; GeForce 8800 graphics card; NVIDIA; Tesla C870 GPU computing card; code generator; computing industry; convolutional neural networks; domain-specific templates; edge detection kernels; forward scalability; graphics processing unit programming; image analysis; lower-level frameworks; many-core computing; multi-core computing; offload unit identification; offloaded computation scheduling; parallel operator graphs; parallel programming templates; software framework; video analysis; Central Processing Unit; Computer industry; Concurrent computing; Graphics; High performance computing; Hybrid power systems; Job shop scheduling; Parallel programming; Processor scheduling; Scalability;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009)
Conference_Location :
Rome, Italy
ISSN :
1530-2075
Print_ISBN :
978-1-4244-3751-1
Electronic_ISBN :
1530-2075
Type :
conf
DOI :
10.1109/IPDPS.2009.5161039
Filename :
5161039