DocumentCode :
456626
Title :
Using Machine Learning to Allocate Parallel Workload
Author :
Long, Shun
Author_Institution :
Dept. of Comput. Sci., JiNan Univ., Guangzhou
Volume :
1
fYear :
2006
fDate :
Aug. 30 2006-Sept. 1 2006
Firstpage :
393
Lastpage :
396
Abstract :
It is widely held that optimal workload allocation cannot be achieved without accounting for the cost of parallelism in a given environment. This paper presents a machine learning approach that allocates parallel workload in a cost-aware manner. The instance-based learning approach uses static program features to classify programs, then selects the best workload allocation scheme based on its prior experience with similar programs. Experimental results on 76 Java benchmarks show that it finds the optimal workload allocation scheme for 36 of them and achieves over 85% of the best speedup on another 19. This demonstrates that the approach can efficiently allocate parallel workload among Java threads and achieve optimal or near-optimal performance.
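The instance-based idea described in the abstract can be sketched as a nearest-neighbour lookup: characterise each previously seen program by a static feature vector, record the allocation scheme that worked best for it, and for a new program reuse the scheme of its most similar prior program. The feature names, training data, and the use of thread count as the allocation scheme below are illustrative assumptions, not details from the paper.

```java
import java.util.*;

// Sketch of instance-based (1-nearest-neighbour) workload allocation:
// classify a new program by static features, then reuse the allocation
// that worked best for the most similar previously seen program.
// All features and data here are hypothetical.
public class AllocationAdvisor {
    // Prior programs: static feature vectors and their best-known thread counts.
    private final List<double[]> features = new ArrayList<>();
    private final List<Integer> bestThreads = new ArrayList<>();

    public void remember(double[] staticFeatures, int threads) {
        features.add(staticFeatures.clone());
        bestThreads.add(threads);
    }

    // 1-nearest-neighbour lookup using squared Euclidean distance.
    public int advise(double[] staticFeatures) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < features.size(); i++) {
            double d = 0;
            double[] f = features.get(i);
            for (int j = 0; j < f.length; j++) {
                double diff = f[j] - staticFeatures[j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return bestThreads.get(best);
    }

    public static void main(String[] args) {
        AllocationAdvisor advisor = new AllocationAdvisor();
        // Illustrative features: {loop count, array accesses, branch density}.
        advisor.remember(new double[]{10, 500, 0.1}, 8); // data-parallel kernel
        advisor.remember(new double[]{2, 20, 0.6}, 2);   // branchy, little work
        // A program resembling the first prior program gets its scheme (8 threads).
        System.out.println(advisor.advise(new double[]{9, 450, 0.15}));
    }
}
```

In the paper the learner also weighs the cost of parallelism in the target environment; a production version would fold such cost estimates into the recorded schemes rather than a bare thread count.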
Keywords :
Java; learning (artificial intelligence); multi-threading; resource allocation; Java thread; instance-based learning approach; machine learning; parallel workload allocation; static program; Computer science; Concurrent computing; Cost function; High performance computing; Java; Machine learning; Parallel processing; Processor scheduling; Virtual machining; Yarn;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Innovative Computing, Information and Control, 2006. ICICIC '06. First International Conference on
Conference_Location :
Beijing
Print_ISBN :
0-7695-2616-0
Type :
conf
DOI :
10.1109/ICICIC.2006.181
Filename :
1691822