DocumentCode :
1915248
Title :
Low-latency Memory-Mapped I/O for Data-Intensive Applications on Fast Storage Devices
Author :
Nae Young Song ; Young Jin Yu ; Woong Shin ; Hyeonsang Eom ; Heon Young Yeom
Author_Institution :
Dept. of Comput. Sci. & Eng., Seoul Nat. Univ., Seoul, South Korea
fYear :
2012
fDate :
10-16 Nov. 2012
Firstpage :
766
Lastpage :
770
Abstract :
These days, mmap() is used alongside read()/write() for file I/O in data-intensive applications, as an alternative I/O method on emerging low-latency devices such as flash-based SSDs. Although memory-mapped file I/O has many advantages, it yields little benefit when combined with large-scale data and fast storage devices. When the working set of an application accessing a file with mmap() exceeds the size of physical memory, I/O performance degrades severely compared to an application using read()/write(). This is mainly because the virtual memory subsystem does not reflect the performance characteristics of the underlying storage device. In this paper, we examine the Linux virtual memory subsystem and the mmap() I/O path to determine how low-latency storage devices interact with the existing virtual memory subsystem. We also propose optimization policies that reduce the overheads of mmap() I/O and implement a prototype in a recent Linux kernel. Our solution guarantees that 1) memory-mapped I/O is several times faster than read/write I/O when the cache-hit ratio is high, and 2) memory-mapped I/O performs at least as well as read/write I/O even when cache misses occur frequently and the overhead of mapping/unmapping pages becomes significant; neither guarantee is achievable with the existing virtual memory subsystem.
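As a minimal illustration of the two I/O paths the abstract contrasts, the sketch below sums the bytes of a file once with explicit read() calls and once through an mmap()ed region that the virtual memory subsystem faults in on demand. The file name "data.bin" and the 4 KiB buffer size are placeholders chosen for the example; this is not the paper's prototype or benchmark code.

/* Sketch contrasting read()-based and mmap()-based sequential file access.
 * "data.bin" and the 4 KiB buffer are illustrative placeholders, not the
 * paper's experimental setup. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static unsigned long sum_with_read(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }
    char buf[4096];                 /* data is explicitly copied into this user buffer */
    unsigned long sum = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += (unsigned char)buf[i];
    close(fd);
    return sum;
}

static unsigned long sum_with_mmap(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }
    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); exit(1); }
    /* Pages are mapped into the address space and faulted in on demand by the
     * virtual memory subsystem; no explicit read() copies are issued. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    munmap(p, st.st_size);
    close(fd);
    return sum;
}

int main(void) {
    const char *path = "data.bin";  /* placeholder input file */
    printf("read(): %lu\n", sum_with_read(path));
    printf("mmap(): %lu\n", sum_with_mmap(path));
    return 0;
}

When the working set fits in memory, the mmap() variant avoids per-call copies; when it does not, page mapping/unmapping overhead becomes the dominant cost, which is the regime the paper targets.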
Keywords :
Linux; input-output programs; storage management; Linux virtual memory subsystem; cache-hit ratio; cache-miss; data-intensive application; fast storage device; flash-based SSD; input-output performance; low-latency device; low-latency memory-mapped input-output; mapping-unmapping page overhead; mmap input-output path; optimization policy; physical memory; virtual memory subsystem;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 SC Companion: High Performance Computing, Networking, Storage and Analysis (SCC)
Conference_Location :
Salt Lake City, UT
Print_ISBN :
978-1-4673-6218-4
Type :
conf
DOI :
10.1109/SC.Companion.2012.105
Filename :
6495887