Abstract:
Caching is a technique first used in memory management to reduce bus traffic and the latency of data access. Web traffic has grown tremendously since the early 1990s, and caching techniques have been adapted to Web caching, which reduces network traffic, user-perceived latency, and server load by storing documents in local proxies. In this paper, we analyze the advantages and disadvantages of several current Web cache replacement algorithms, including the lowest relative value algorithm, the least weighted usage algorithm, and the least unified-value (LUV) algorithm. Based on this analysis, we propose a new algorithm, called least grade replacement (LGR), which takes recency, frequency, perfect history, and document size into account for Web cache optimization. The optimal recency coefficients were determined using 2- and 4-way set-associative caches, with the cache size varied from 32 KB to 256 KB in the simulation. The simulation results show that the new algorithm (LGR) outperforms LRU and LFU in terms of both hit ratio (HR) and byte hit ratio (BHR).
Keywords:
Internet; cache storage; content-addressable storage; Web cache optimization; Web cache replacement algorithm; Web traffic; associative caches; latency; least grade page replacement algorithm; least unified-value algorithm; least weighted usage algorithm; memory management; algorithm design and analysis; data engineering; data mining; delay; frequency; information science; knowledge engineering; software algorithms; statistics; telecommunication traffic
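The core idea behind a grade-based replacement policy, as described in the abstract, is to score each cached document by combining recency, frequency, and document size, and to evict the lowest-scoring entry on a miss. The following is a minimal sketch of that idea; the grade function, the weights, and all class and method names are illustrative assumptions, not the paper's actual LGR formula.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CacheEntry:
    key: str
    size: int                 # document size in bytes
    freq: int = 1             # access frequency so far
    last_access: float = field(default_factory=time.monotonic)

class GradedCache:
    """Toy byte-budgeted cache that evicts the lowest-grade entry.

    The grade combines frequency, recency, and size; the 0.5/0.4/0.1
    weights below are arbitrary placeholders, not the coefficients
    determined in the paper's simulations.
    """

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries: dict[str, CacheEntry] = {}

    def _grade(self, e: CacheEntry, now: float) -> float:
        # Higher grade = more worth keeping: frequent, recent, small documents.
        recency = 1.0 / (1.0 + (now - e.last_access))
        return 0.5 * e.freq + 0.4 * recency + 0.1 * (1.0 / e.size)

    def access(self, key: str, size: int) -> bool:
        """Record an access; return True on a cache hit, False on a miss."""
        now = time.monotonic()
        if key in self.entries:
            e = self.entries[key]
            e.freq += 1
            e.last_access = now
            return True
        # Miss: evict lowest-grade entries until the new document fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries.values(),
                         key=lambda e: self._grade(e, now))
            self.used -= victim.size
            del self.entries[victim.key]
        if size <= self.capacity:
            self.entries[key] = CacheEntry(key, size)
            self.used += size
        return False
```

Hit ratio (HR) and byte hit ratio (BHR) can then be computed by counting, over a request trace, the fraction of requests (and of requested bytes) satisfied from the cache.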