Title :
Predicting cache space contention in utility computing servers
Author :
Solihin, Yan ; Guo, Fei ; Kim, Seongbeom
Author_Institution :
Dept. of Electr. & Comput. Eng., North Carolina State Univ., Raleigh, NC, USA
Abstract :
The need to provide performance guarantees in high-performance servers has long been neglected. Providing performance guarantees in current and future servers is difficult because fine-grain resources, such as on-chip caches, are shared by multiple processors or thread contexts. Although interthread cache sharing generally improves the overall throughput of the system, the impact of cache contention on the threads that share the cache is highly non-uniform: some threads may be slowed down significantly, while others are not. This may cause severe performance problems such as sub-optimal throughput, cache thrashing, and starvation of threads that fail to occupy sufficient cache space to make good progress. Clearly, this situation is undesirable when performance guarantees must be provided, such as in utility computing servers. Unfortunately, no existing model allows extensive investigation of the impact of cache sharing. To enable such a study, we propose an inductive probability model to predict the impact of cache sharing on co-scheduled threads. The input to the model is the isolated L2 circular sequence profile of each thread, which can be easily obtained on-line or off-line. The output of the model is the number of extra L2 cache misses incurred by each thread due to cache sharing. We validate the model against a cycle-accurate simulation of a dual-core chip multi-processor (CMP) architecture, on fourteen pairs of mostly SPEC benchmarks. The model achieves an average error of only 3.9%.
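The abstract's core idea is that a thread's isolated reuse profile, combined with a co-runner's access rate, can predict the extra misses it suffers under sharing. The sketch below is only a simplified illustration of that idea using a stack-distance-inflation heuristic, not the paper's actual inductive probability model; the interleaving ratio `corun_rate` and the histogram representation are assumptions made for the example.

```python
def predict_extra_misses(profile, corun_rate, cache_blocks):
    """Estimate extra misses a thread suffers when sharing a cache.

    profile[d]   -- count of accesses with isolated reuse distance d
                    (measured in cache blocks); a hypothetical input format.
    corun_rate   -- assumed co-runner accesses interleaved per access of
                    this thread; each reuse distance is inflated by it.
    cache_blocks -- total number of blocks in the shared cache.
    Returns (isolated_misses, extra_misses_due_to_sharing).
    """
    # Isolated misses: reuse distance already exceeds the cache capacity.
    isolated = sum(c for d, c in enumerate(profile) if d >= cache_blocks)
    # Under sharing, co-runner accesses push blocks toward eviction,
    # effectively inflating every reuse distance.
    shared = sum(c for d, c in enumerate(profile)
                 if d * (1 + corun_rate) >= cache_blocks)
    return isolated, shared - isolated
```

With a toy profile of 100 accesses at distance 2 and 50 at distance 6 in an 8-block cache, an equal-rate co-runner doubles effective distances, so the distance-6 accesses become misses while the distance-2 accesses survive.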
Keywords :
cache storage; multi-threading; multiprocessing systems; system-on-chip; utility programs; L2 cache miss; cache thrashing; co-scheduled thread; dual-core chip multi-processor; fine-grain resources; high performance server; multiple processor; on-chip caches; probability model; thread starvation; utility computing server; Distributed processing; Hardware; High performance computing; Predictive models; Throughput; Web server;
Conference_Title :
Parallel and Distributed Processing Symposium, 2005. Proceedings. 19th IEEE International
Print_ISBN :
0-7695-2312-9
DOI :
10.1109/IPDPS.2005.354