Abstract:
The buzzword "big data" refers to large-scale distributed applications that operate on unprecedentedly large data sets. Google's MapReduce framework and Apache Hadoop, its open-source implementation, are the de facto software systems for big-data applications. An observation regarding these applications is that they generate a large amount of intermediate data, yet this abundant information is thrown away once the processing finishes. Motivated by this observation, we propose Dache, a data-aware cache framework for big-data applications. In Dache, tasks submit their intermediate results to the cache manager. Before initiating its execution, a task queries the cache manager for potentially matching processing results, which could accelerate its execution or even save the execution entirely. A novel cache description scheme and a cache request-and-reply protocol are designed. We implement Dache by extending the relevant components of the Hadoop project. Testbed experiment results demonstrate that Dache significantly reduces the completion time of MapReduce jobs and saves a significant amount of CPU execution time.
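To make the query-then-submit interaction concrete, the following is a minimal sketch, purely illustrative and not Dache's actual API: the class and method names (CacheManager, CacheItem, query, submit) and the keying on an input split plus an operation identifier are assumptions used only to show the cache request/reply idea described above.

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical cache manager sketch; names and key layout are assumptions, not the Dache implementation. */
public class CacheManager {

    /** A cached intermediate result: the input split, the applied operation, and where the output is stored. */
    public static final class CacheItem {
        final String inputSplit;   // e.g. file name plus byte range
        final String operation;    // e.g. identifier of the map/reduce function applied
        final String resultPath;   // location of the stored intermediate result

        CacheItem(String inputSplit, String operation, String resultPath) {
            this.inputSplit = inputSplit;
            this.operation = operation;
            this.resultPath = resultPath;
        }
    }

    // Cache keyed by (input split, operation); a real system would also handle partial matches and eviction.
    private final Map<String, CacheItem> cache = new ConcurrentHashMap<>();

    private static String key(String inputSplit, String operation) {
        return inputSplit + "|" + operation;
    }

    /** Cache request: a task asks whether a matching processing result already exists. */
    public Optional<CacheItem> query(String inputSplit, String operation) {
        return Optional.ofNullable(cache.get(key(inputSplit, operation)));
    }

    /** Cache submission: a completed task publishes its intermediate result to the cache manager. */
    public void submit(String inputSplit, String operation, String resultPath) {
        cache.put(key(inputSplit, operation), new CacheItem(inputSplit, operation, resultPath));
    }

    // Tiny usage example: a task checks the cache before running its computation.
    public static void main(String[] args) {
        CacheManager manager = new CacheManager();
        manager.submit("part-0000[0,64MB)", "wordcount-map", "/cache/wc/part-0000");

        manager.query("part-0000[0,64MB)", "wordcount-map")
               .ifPresentOrElse(
                   item -> System.out.println("Cache hit, reuse " + item.resultPath),
                   ()   -> System.out.println("Cache miss, execute the task"));
    }
}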
Keywords:
big data; MapReduce; Hadoop; Dache; cache management; data-aware cache framework; cache description scheme; cache request-and-reply protocol; distributed file system; cache storage; parallel programming; query processing