DocumentCode :
3704170
Title :
Hadoop Characterization
Author :
Icaro Alzuru;Kevin Long;Bhaskar Gowda;David Zimmerman;Tao Li
Author_Institution :
CISE, Univ. of Florida, Gainesville, FL, USA
Volume :
2
Year :
2015
Firstpage :
96
Lastpage :
103
Abstract :
In the last decade, Warehouse Scale Computers (WSC) have grown in number and capacity, while Hadoop has become the de facto standard framework for Big Data processing. Despite the existence of several benchmark suites, sizing guides, and characterization studies, there are few concrete guidelines for WSC designers and engineers who need to know how real Hadoop workloads will stress the different hardware subsystems of their servers. Available studies have shown execution statistics of Hadoop benchmarks but have not been able to extract meaningful and reusable results, and existing sizing guides provide hardware acquisition lists without considering the workloads. In this study, we propose a simple Big Data workload differentiation, deliver general and specific conclusions about how demanding the different types of Hadoop workloads are for several hardware subsystems, and show how power consumption is influenced in each case. The HiBench and BigBench suites were used to capture real-time memory traces and CPU, disk, and power consumption statistics of Hadoop. Our results show that CPU-intensive and disk-intensive workloads behave differently: CPU-intensive workloads consume more power and memory bandwidth, while disk-intensive workloads usually require more memory. These and other conclusions presented in the paper are expected to help WSC designers decide the hardware characteristics of their Hadoop systems and better understand the behavior of Big Data workloads in Hadoop.
Keywords :
"Benchmark testing","Big data","Memory management","Hardware","Servers","Power demand","Bandwidth"
Publisher :
ieee
Conference_Titel :
Trustcom/BigDataSE/ISPA, 2015 IEEE
Type :
conf
DOI :
10.1109/Trustcom.2015.567
Filename :
7345480