Title :
Hypertasking: automatic array and loop partitioning on the iPSC
Abstract :
Many data parallel problems share a similar domain decomposition strategy for hypercube architectures. By mapping sub-blocks of each array variable to the nodes in a manner that matches the hypercube topology to the geometry of the underlying problem, one can maximize locality of reference and minimize message passing overhead. Each node then applies the same algorithm used in the original sequential code to its subset of the data, synchronizing with its neighbors as necessary. The author describes a domain decomposition tool that transforms sequential C programs with comment directives into SPMD (single program multiple data) C programs which can run on hypercubes of any size. Directives allow the user to mark certain arrays to be distributed, to restrict loops to indices that refer to the local sub-block of an array, and to cause exchanges of boundary values to occur between nodes.
Keywords :
C language; data structures; hypercube networks; parallel architectures; parallel programming; SPMD; array variable; boundary values; comment directives; data parallel problems; domain decomposition strategy; domain decomposition tool; hypercube architectures; hypercube topology; hypertasking; iPSC; local sub-block; loop partitioning; message passing overhead; sequential C programs; sequential code; single program multiple data; Computer architecture; Concurrent computing; Finite difference methods; Hypercubes; Mathematics; Matrix decomposition; Memory architecture; Message passing; Topology; User interfaces;
Conference_Titel :
Proceedings of the Twenty-Fourth Annual Hawaii International Conference on System Sciences, 1991
Conference_Location :
Kauai, HI
DOI :
10.1109/HICSS.1991.184006