DocumentCode :
2784831
Title :
Minimizing Latency in Serving Requests through Differential Template Caching in a Cloud
Author :
Jeswani, Deepak ; Gupta, Manish ; De, Pradipta ; Malani, Arpit ; Bellur, Umesh
Author_Institution :
IBM Res., New Delhi, India
fYear :
2012
fDate :
24-29 June 2012
Firstpage :
269
Lastpage :
276
Abstract :
In the Software-as-a-Service (SaaS) cloud delivery model, a hosting center deploys a Virtual Machine (VM) image template on a server on demand. Image templates are usually maintained in a central repository. With geographically dispersed hosting centers, transferring a large, often gigabyte-sized, template file from the repository incurs high latency due to limited Internet bandwidth. An architecture that maintains a template cache collocated with the hosting centers can reduce request service latency. Since templates are large, caching complete templates is prohibitive in terms of storage space. To optimize cache space requirements, as well as to reduce transfers from the repository, we propose a differential template caching technique called DiffCache. A difference file, or patch, between two templates that share common components is small in size. DiffCache computes an optimal selection of templates and patches based on the frequency of requests for specific templates. A template missing from the cache can be generated if a cached template can be patched with a cached patch file, thereby saving the transfer time from the repository at the cost of a relatively small patching time. We show that patch-based caching, coupled with intelligent population of the cache, can lead to a 90% improvement in service request latency when compared with caching only template files.
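The abstract describes a three-way lookup: serve a template directly from the cache, otherwise reconstruct it by applying a cached patch to a cached base template, otherwise fetch the full template from the central repository. The sketch below illustrates that lookup order only; it is not the authors' implementation, and the class, the toy block-level patch format, and the repository.fetch helper are illustrative assumptions.

```python
from dataclasses import dataclass, field


def apply_patch(base: bytes, patch: dict) -> bytes:
    """Toy block-level patch: replace 4 KiB blocks of `base` at the given block
    indices. A real deployment would use a block-based diff/patch tool on the
    image files (the paper relies on block-based differencing and patching)."""
    block_size = 4096
    blocks = [bytearray(base[i:i + block_size]) for i in range(0, len(base), block_size)]
    for index, data in patch.items():
        while index >= len(blocks):          # a patch may extend the image
            blocks.append(bytearray())
        blocks[index] = bytearray(data)
    return b"".join(bytes(b) for b in blocks)


@dataclass
class DiffCache:
    templates: dict = field(default_factory=dict)  # template_id -> template bytes
    patches: dict = field(default_factory=dict)    # (base_id, target_id) -> patch

    def get(self, template_id, repository):
        # 1. Cache hit: the full template is already cached.
        if template_id in self.templates:
            return self.templates[template_id]

        # 2. Patch hit: a cached base template can be patched into the request.
        for (base_id, target_id), patch in self.patches.items():
            if target_id == template_id and base_id in self.templates:
                template = apply_patch(self.templates[base_id], patch)
                self.templates[template_id] = template  # keep the reconstructed copy
                return template

        # 3. Miss: transfer the full template from the central repository (slow path,
        #    assumes a repository object exposing fetch(template_id)).
        template = repository.fetch(template_id)
        self.templates[template_id] = template
        return template
```

Which templates and patches to keep cached is the optimization the paper addresses, driven by per-template request frequencies; the sketch above only shows how a cached patch substitutes for a full transfer.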
Keywords :
cache storage; cloud computing; virtual machines; DiffCache; Internet bandwidth; SaaS cloud delivery model; Software-as-a-Service; VM image template; cache space requirement optimization; cloud computing; difference file; differential template caching technique; geographically dispersed hosting centers; large sized template file transfer; patch based caching; patch file; patching time; request service latency minimization; request service latency reduction; specific template request frequency; storage space; template cache; transfer time; virtual machine; Bandwidth; Cloud computing; Data structures; Linux; Peer to peer computing; Servers; Block-based differencing and patching; Cache; Cloud Computing; Virtual Appliance;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on
Conference_Location :
Honolulu, HI
ISSN :
2159-6182
Print_ISBN :
978-1-4673-2892-0
Type :
conf
DOI :
10.1109/CLOUD.2012.17
Filename :
6253515