DocumentCode :
3299534
Title :
Highly Efficient Gang Scheduling Implementation
Author :
Ishikawa, Yozo
fYear :
1998
fDate :
07-13 Nov. 1998
Firstpage :
43
Lastpage :
43
Abstract :
This paper presents a new, highly efficient gang scheduling implementation technique. Network preemption, in which network interface contexts are saved and restored, has already been proposed to enable parallel applications to perform efficient user-level communication. The network preemption technique can also be used to detect global states, such as deadlock, of a parallel program execution. A gang scheduler, SCore-D, using network preemption is implemented on top of PM, a user-level communication library. This paper evaluates the overhead of gang scheduling with network preemption using eight NAS parallel benchmark programs. The evaluation shows that saving and restoring network contexts accounts for almost half of the total gang scheduling overhead. A new mechanism is therefore proposed: the network interface holds multiple network contexts, and preemption merely switches context pointers instead of saving and restoring the contexts. The NAS parallel benchmark evaluation shows that gang scheduling overhead is almost halved with this mechanism. The maximum gang scheduling overhead among the benchmark programs is less than 10% with a 40 msec time slice on 64 single-processor Pentium Pro nodes connected by Myrinet to form a PC cluster. Counting secondary cache misses reveals that network preemption with multiple network contexts is more cache-effective than with a single network context. The observed scheduling overhead for applications running on 64 nodes is only a small percentage of the execution time. The gang scheduling overhead of switching between two NAS parallel benchmark programs is also evaluated. The additional overhead is less than 2% in most cases with a 100 msec time slice on 64 nodes; this slightly higher overhead, compared with switching a single parallel process, comes from more frequent cache misses. The paper contributes the following findings: i) gang scheduling overhead with network preemption can be sufficiently low; ii) the proposed network preemption with multiple network contexts is more cache-effective than with a single network context; and iii) network preemption can be applied to detect global states of user parallel processes. The SCore-D gang scheduler realized by network preemption can better utilize processor resources by detecting the global state of user parallel processes, and network preemption with multiple contexts enables highly efficient gang scheduling. The combination of low scheduling overhead and the global state detection mechanism achieves an interactive parallel programming environment in which parallel program development and production runs of parallel programs can be mixed freely.
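To make the multiple-context idea concrete, the following is a minimal sketch in C of the difference between save/restore preemption and pointer-switch preemption. All names (nic_context, preempt_single, preempt_multi, the slot count and context size) are hypothetical illustrations, not PM's or SCore-D's actual API; the sketch only contrasts the two switching costs described in the abstract.

/*
 * Illustrative sketch only: names and fields are hypothetical, not PM's API.
 * With a single network context, every gang switch must copy the NIC state
 * out and back in; with multiple on-board contexts, a switch just changes
 * which context slot the network interface currently uses.
 */
#include <string.h>

#define NUM_CONTEXTS 4          /* on-board context slots (assumed) */
#define CONTEXT_SIZE 4096       /* bytes of NIC state per context (assumed) */

typedef struct {
    unsigned char state[CONTEXT_SIZE];  /* send/receive queues, DMA state, ... */
} nic_context;

static nic_context contexts[NUM_CONTEXTS];  /* contexts resident on the interface */
static nic_context *active;                 /* context the NIC is currently using */

/* Single-context scheme: saving and restoring dominates the switch cost. */
void preempt_single(nic_context *save_area, const nic_context *next)
{
    memcpy(save_area, active, sizeof *active);  /* save the current context */
    memcpy(active, next, sizeof *active);       /* restore the next context */
}

/* Multiple-context scheme: switching is just a pointer update. */
void preempt_multi(int next_slot)
{
    active = &contexts[next_slot];              /* no copy of NIC state */
}

In this sketch the per-switch cost drops from two full-context copies to a single pointer assignment, which is consistent with the reported halving of gang scheduling overhead when the save/restore step is eliminated.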
Keywords :
distributed termination; gang scheduling; network preemption; user-level communication; Communication switching; Context; Delay; Frequency; Libraries; Network interfaces; Parallel programming; Processor scheduling; Production; System recovery
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Supercomputing, 1998. SC98. IEEE/ACM Conference on
Print_ISBN :
0-8186-8707-X
Type :
conf
DOI :
10.1109/SC.1998.10007
Filename :
1437330