DocumentCode :
263642
Title :
Performance Evaluation of Read and Write Operations in Hadoop Distributed File System
Author :
Krishna, Talluri Lakshmi Siva Rama ; Ragunathan, Thirumalaisamy ; Battula, Sudheer Kumar
Author_Institution :
Jawaharlal Nehru Inst. of Adv. Studies, Hyderabad, India
fYear :
2014
fDate :
13-15 July 2014
Firstpage :
110
Lastpage :
113
Abstract :
The Hadoop Distributed File System (HDFS) is the core storage component of the Apache Hadoop project. In HDFS, computation is carried out on the nodes where the relevant data is stored. Hadoop also implements a parallel computation paradigm named Map-Reduce. In this paper, we measure the performance of read and write operations in HDFS for both small and large files. For the performance evaluation, we used a Hadoop cluster with five nodes. The results indicate that HDFS performs well for files larger than the default block size and poorly for files smaller than the default block size.
Keywords :
parallel databases; performance evaluation; Apache Hadoop project; HDFS; Hadoop Distributed File System; Hadoop cluster; Map-Reduce; default block size; parallel computational paradigm; performance evaluation; read operations; write operations; Educational institutions; Fault tolerance; File systems; Google; Operating systems; Performance evaluation; Writing; Distributed File System; Hadoop; Hadoop Distributed File System; Map-Reduce;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP)
Conference_Location :
Beijing
ISSN :
2168-3034
Print_ISBN :
978-1-4799-3844-5
Type :
conf
DOI :
10.1109/PAAP.2014.49
Filename :
6916446