DocumentCode :
2422758
Title :
Paradigms for Parallel Computation
Author :
Speyer, Gil ; Freed, Natalie ; Akis, Richard ; Stanzione, Dan ; Mack, Eric
Author_Institution :
Arizona State Univ., Tempe, AZ
fYear :
2008
fDate :
14-17 July 2008
Firstpage :
486
Lastpage :
494
Abstract :
Message passing, as implemented in the Message Passing Interface (MPI), has become the industry standard for programming distributed memory parallel architectures, while threading on shared memory machines is typically implemented in OpenMP. Outstanding performance has been achieved with these methods, but only on a relatively small number of codes, and expert tuning is required, particularly as system size increases. With the advent of multicore/manycore microprocessors, and continued growth in system size, an inflection point may be nearing that will require a substantial shift in the way large scale systems are programmed if productivity is to be maintained. The Parallel Paradigms project was undertaken to evaluate the near-term readiness of a number of emerging ideas in parallel programming, with specific emphasis on their applicability to applications in the user productivity enhancement and technology transfer (PET) electronics, networking, and systems/C4I (ENS) focus area. The evaluation included examinations of usability, performance, scalability, and support for fault tolerance. Applications representative of ENS problems were ported to each of the evaluated languages, along with a set of “dwarf” codes representing a broader workload. In addition, a user study was undertaken in which teaching modules were developed for each paradigm and delivered to groups of both novice and expert programmers to measure productivity. Results are presented for six paradigms currently under evaluation. Experiences with each of these models are presented, including the performance of applications re-coded across these models and feedback from users.
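The abstract contrasts the two incumbent paradigms: MPI-style message passing across separate address spaces versus OpenMP-style threading over shared memory. As an illustrative sketch only (not code from the paper), the same contrast can be modeled in standard-library Python: `multiprocessing` processes with explicit pipes stand in for message passing, while `threading` with a lock stands in for shared-memory updates. All function names here are hypothetical.

```python
# Hypothetical sketch contrasting the two paradigms named in the abstract.
# Message passing: each worker owns its data and explicitly sends a result.
# Shared memory: workers update one shared accumulator under a lock.
import multiprocessing as mp
import threading


def _mp_worker(conn, rank):
    # MPI-like style: no shared state; the partial sum travels over a pipe.
    conn.send(sum(range(rank * 10, (rank + 1) * 10)))
    conn.close()


def message_passing_sum(num_workers=4):
    pipes, procs = [], []
    for rank in range(num_workers):
        parent, child = mp.Pipe()
        p = mp.Process(target=_mp_worker, args=(child, rank))
        p.start()
        pipes.append(parent)
        procs.append(p)
    total = sum(conn.recv() for conn in pipes)  # explicit receives
    for p in procs:
        p.join()
    return total


def shared_memory_sum(num_threads=4):
    # OpenMP-like style: one address space, implicit sharing, a lock
    # playing the role of a critical section.
    result = {"total": 0}
    lock = threading.Lock()

    def worker(rank):
        partial = sum(range(rank * 10, (rank + 1) * 10))
        with lock:
            result["total"] += partial

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result["total"]


if __name__ == "__main__":
    # Both styles compute sum(range(40)) = 780 over four workers.
    print(message_passing_sum(), shared_memory_sum())
```

The design difference the paper's evaluation turns on is visible even here: the message-passing version makes all communication explicit (and so ports to distributed memory), while the threaded version is shorter but relies on correct locking of shared state.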
Keywords :
application program interfaces; distributed memory systems; fault tolerant computing; message passing; parallel architectures; parallel programming; shared memory systems; C4I; ENS problems; OpenMP; applications recoding; distributed memory parallel architecture programming; dwarf codes; fault tolerance; industry standard; message passing interface; multicore microprocessors; parallel computation; parallel programming; shared memory machines; technology transfer electronics; user productivity enhancement; Concurrent computing; Large-scale systems; Message passing; Microprocessors; Multicore processing; Parallel architectures; Parallel programming; Positron emission tomography; Productivity; Technology transfer; Parallel processing; languages;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
DoD HPCMP Users Group Conference, 2008. DOD HPCMP UGC
Conference_Location :
Seattle, WA
Print_ISBN :
978-1-4244-3323-0
Type :
conf
DOI :
10.1109/DoD.HPCMP.UGC.2008.18
Filename :
4755913