DocumentCode :
2384373
Title :
Fast Query Processing by Distributing an Index over CPU Caches
Author :
Ma, Xiaoqin ; Cooperman, Gene
Author_Institution :
Coll. of Comput. & Inf. Sci., Northeastern Univ., Boston, MA
fYear :
2005
fDate :
Sept. 2005
Firstpage :
1
Lastpage :
10
Abstract :
Data-intensive applications on clusters often require that requests be quickly sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the node responsible for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the Internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup: one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregate CPU cache of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree. The principle is demonstrated with a cluster of Pentium III nodes connected by a Myrinet network. The new approach is shown to be 50% faster on this current cluster. In the future, the new approach is expected to have a still greater advantage as networks grow in speed and as cache lines grow in length (greater cache miss penalty). This can be used to successfully overcome the inherent memory latency associated with cache misses.
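The sketch below is not from the paper; it is a minimal, hypothetical illustration of the idea the abstract describes: a sorted index is partitioned by key range across cluster nodes so that each shard fits in a CPU cache, and lookups are routed to the owning node. The class names, the in-process "network", and the example data are all assumptions; a real implementation would ship keys in batched Myrinet/MPI packets.

```cpp
// Hypothetical sketch: distributing a sorted index across cluster nodes
// and routing each lookup to the node whose key range contains it.
// The remote message send is simulated by an in-process method call.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct NodeShard {
    uint64_t low;                     // smallest key owned by this node
    std::vector<uint64_t> keys;       // sorted keys, small enough to stay cache-resident
    std::vector<int>      values;     // e.g., id of the node storing the data

    // Local lookup: binary search inside the cache-resident shard.
    int lookup(uint64_t key) const {
        auto it = std::upper_bound(keys.begin(), keys.end(), key);
        if (it == keys.begin()) return -1;          // key precedes this shard's range
        return values[(it - keys.begin()) - 1];
    }
};

class DistributedIndex {
public:
    explicit DistributedIndex(std::vector<NodeShard> shards)
        : shards_(std::move(shards)) {}

    // Route a batch of lookups: each key is "sent" to the shard that owns
    // its range (in the paper, many such words are packed into one packet).
    std::vector<int> batch_lookup(const std::vector<uint64_t>& queries) const {
        std::vector<int> results;
        results.reserve(queries.size());
        for (uint64_t q : queries)
            results.push_back(owner_of(q).lookup(q));   // simulated remote lookup
        return results;
    }

private:
    // Shards are ordered by their lowest key; pick the last shard whose
    // range starts at or below the query key.
    const NodeShard& owner_of(uint64_t key) const {
        size_t i = shards_.size() - 1;
        while (i > 0 && shards_[i].low > key) --i;
        return shards_[i];
    }

    std::vector<NodeShard> shards_;
};

int main() {
    // Two hypothetical nodes, each holding half of the sorted index.
    DistributedIndex index({
        {0,  {0, 10, 20, 30},  {0, 0, 1, 1}},
        {40, {40, 50, 60, 70}, {2, 2, 3, 3}},
    });
    for (int node : index.batch_lookup({15, 55, 70}))
        std::cout << "data node: " << node << "\n";
    return 0;
}
```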
Keywords :
cache storage; local area networks; query processing; tree data structures; CPU caches; CPU-RAM gap; Myrinet network; Pentium III nodes; clusters; data intensive applications; local area networks; memory latency; performance degradation; query processing; sorted tree structure; Database systems; Degradation; IP networks; Local area networks; Middleware; Publish-subscribe; Query processing; Routing; Sensor systems; Tree data structures;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2005 IEEE International Conference on Cluster Computing
Conference_Location :
Burlington, MA
ISSN :
1552-5244
Print_ISBN :
0-7803-9486-0
Electronic_ISBN :
1552-5244
Type :
conf
DOI :
10.1109/CLUSTR.2005.347047
Filename :
4154090