Title :
Collaborative Caching for Unknown Cache Sizes
Abstract :
In this work, we first present a prioritized LRU model. For each memory access, the program specifies a priority. The loaded datum can be inserted in the middle of the cache stack, between the LRU and MRU positions, or bypassed entirely if the associated priority is too low. Prioritized LRU naturally organizes program accesses for all cache sizes. Alternatively, we describe a dynamic cache control scheme. As in prioritized LRU, each access is associated with a priority. The dynamic control compares the priority with the cache size and then chooses either the LRU or the MRU position for placing the data. As a result, a program is optimized for all cache sizes instead of a single one. In the discussion, we assume a fully associative cache; the same idea can be applied to each set of a set-associative cache. A remaining problem is the instruction overhead, since each load and store now carries a priority number. One way to reduce this cost is to hold the priority numbers in a few dedicated registers and to include a few bits in each load or store instruction to select the priority register.
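To make the two schemes concrete, below is a minimal Python sketch of a fully associative prioritized-LRU stack and the binary LRU-MRU decision. The class name PrioritizedLRUCache, the exact mapping from priority to stack depth, and the direction of the comparison in dynamic_position are illustrative assumptions, not details taken from the paper.

    # Sketch of prioritized LRU: the cache stack is a list where
    # index 0 is the MRU position and the last index is the LRU position.
    class PrioritizedLRUCache:
        def __init__(self, size):
            self.size = size
            self.stack = []  # stack[0] is MRU, stack[-1] is LRU

        def access(self, addr, priority):
            """Access addr with a program-supplied priority; return True on a hit."""
            if addr in self.stack:
                self.stack.remove(addr)
                hit = True
            else:
                hit = False
                if priority <= 0:
                    return False          # priority too low: bypass the cache
                if len(self.stack) >= self.size:
                    self.stack.pop()      # evict the datum at the LRU position
            # Insert `priority` slots above the LRU end; a priority of at
            # least the cache size places the datum at the MRU position
            # (assumed mapping for this sketch).
            depth = max(0, len(self.stack) + 1 - priority)
            self.stack.insert(depth, addr)
            return hit

    def dynamic_position(priority, cache_size):
        """Dynamic LRU-MRU control: a binary collapse of the scheme above.
        Data with priority at or above the cache size goes to the MRU
        position; everything else goes to the LRU position, where it is
        the first candidate for eviction. The comparison direction is an
        assumption, chosen to be consistent with the prioritized model."""
        return "MRU" if priority >= cache_size else "LRU"

    if __name__ == "__main__":
        cache = PrioritizedLRUCache(size=4)
        trace = [("a", 4), ("b", 1), ("c", 0), ("a", 4)]  # (address, priority)
        for addr, prio in trace:
            print(addr, "hit" if cache.access(addr, prio) else "miss", cache.stack)

In the example trace, "a" (high priority) is inserted at the MRU end and survives until its reuse, "b" (low priority) enters at the LRU end, and "c" (priority 0) bypasses the cache, illustrating how priorities order data for any cache size.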
Keywords :
cache storage; LRU model; MRU positions; associative cache; collaborative caching; dynamic cache control scheme; instruction overhead; memory access; priority register; unknown cache sizes; dynamic LRU-MRU determination; priority LRU;
Conference_Title :
2011 International Conference on Parallel Architectures and Compilation Techniques (PACT)
Conference_Location :
Galveston, TX
Print_ISBN :
978-1-4577-1794-9
DOI :
10.1109/PACT.2011.50