Title :
High Performance Memory Management for a Multi-core Architecture
Author :
Liu, Mengxiao ; Ji, Weixing ; Wang, Zuo ; Li, Jiaxin ; Pu, Xing
Author_Institution :
Sch. of Comput. Sci. & Technol., Beijing Inst. of Technol., Beijing, China
Abstract :
Architecture and the memory system are two important factors that influence the performance of parallel processing systems. This paper builds on a previously proposed scalable, triple-based multi-core architecture that provides hardware-level support for the object-oriented methodology; however, the memory wall remains the bottleneck of overall system performance. We present a hierarchical shared memory (HSM) architecture, a hierarchically constructed memory shared by multiple cores. A new partially-inclusive mapping policy is used to map data among the different levels of cache and memory, which facilitates the coherence of the shared memory, and a novel object management scheme is also proposed. In multi-core systems, all cores share the DRAM bandwidth, which makes it a critical shared resource. To address problems such as starvation, complexity, and unpredictable DRAM access latency, we present a DRAM access management scheme, fair dynamic pipelining (FDP) memory access scheduling, with two key features. First, the scheme avoids unexpectedly long latencies and starvation of memory requests through a dynamic pipeline arrangement policy. Second, it provides an alterable priority strategy that makes memory responses fairer. Comparisons with other common approaches show that our object management is superior in both the spatial and temporal aspects of parallel memory access efficiency and requires less storage space to organize objects than link-structured object organization. Experimental results show that FDP scheduling shares the bandwidth so that multi-core memory accesses achieve the desired average latencies.
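The abstract does not detail the FDP priority strategy; the following minimal C sketch illustrates, under our own assumptions, one way an alterable, aging-based priority could keep memory requests from starving while still favoring fast row-buffer hits. All names and constants here (mem_request, pick_next, STARVATION_LIMIT) are hypothetical and are not taken from the paper.

```c
/* Illustrative sketch only, not the paper's implementation: an aging-based
 * priority picker for pending DRAM requests. Requests gain priority the
 * longer they wait, and any request whose age exceeds STARVATION_LIMIT is
 * served first, which is one simple way to realise an "alterable priority"
 * with anti-starvation behaviour. */
#include <stdio.h>

#define NUM_REQUESTS      4
#define STARVATION_LIMIT 16   /* assumed cycle limit before a request is forced ahead */

struct mem_request {
    int core_id;   /* issuing core */
    int row_hit;   /* 1 if the request targets the currently open DRAM row */
    int age;       /* cycles spent waiting in the queue */
};

/* Higher score = served earlier. Row-buffer hits get a bonus, but age
 * eventually dominates so no core is starved. */
static int priority(const struct mem_request *r)
{
    if (r->age >= STARVATION_LIMIT)
        return 1000 + r->age;          /* force starving requests ahead of everything */
    return (r->row_hit ? 8 : 0) + r->age;
}

static int pick_next(const struct mem_request *q, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (priority(&q[i]) > priority(&q[best]))
            best = i;
    return best;
}

int main(void)
{
    struct mem_request queue[NUM_REQUESTS] = {
        { .core_id = 0, .row_hit = 1, .age = 2  },
        { .core_id = 1, .row_hit = 0, .age = 5  },
        { .core_id = 2, .row_hit = 1, .age = 1  },
        { .core_id = 3, .row_hit = 0, .age = 17 },  /* about to starve */
    };

    int next = pick_next(queue, NUM_REQUESTS);
    printf("serve request from core %d (age %d, row_hit %d)\n",
           queue[next].core_id, queue[next].age, queue[next].row_hit);
    return 0;
}
```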
Keywords :
DRAM chips; cache storage; parallel architectures; parallel processing; shared memory systems; storage management; DRAM access management; DRAM bandwidth; cache; fair dynamic pipelining; hardware level support; hierarchical shared memory; high performance memory management; memory access scheduling; memory wall; multicore architecture; object-oriented methodology; parallel processing systems; partially-inclusive mapping policy; Bandwidth; Delay; Hardware; Memory architecture; Memory management; Parallel processing; Pipeline processing; Random access memory; Scheduling; System performance; CMPs; DRAM; cache mapping; memory hierarchy; memory wall; memory access scheduling; object management; object-oriented;
Conference_Title :
Computer and Information Technology, 2009. CIT '09. Ninth IEEE International Conference on
Conference_Location :
Xiamen
Print_ISBN :
978-0-7695-3836-5
DOI :
10.1109/CIT.2009.120