DocumentCode :
1595891
Title :
Block Size Optimization in Deduplication Systems
Author :
Constantinescu, Cornel ; Pieper, Jan ; Li, Tiancheng
Author_Institution :
Almaden Res. Center, IBM, San Jose, CA
fYear :
2009
Firstpage :
442
Lastpage :
442
Abstract :
Data deduplication is a popular dictionary-based compression method used in storage archival and backup. Deduplication efficiency improves with smaller chunk sizes; however, files become highly fragmented, requiring many disk accesses during reconstruction or causing chattiness in a client-server architecture. Within the sequence of chunks that an object (file) is decomposed into, sub-sequences of adjacent chunks tend to repeat. We exploit this insight to optimize the chunk sizes by joining repeated sub-sequences of small chunks into new super chunks, under the constraint of achieving practically the same matching performance. We employ suffix arrays to find these repeating sub-sequences and to determine a new encoding that covers the original sequence. With super chunks we significantly reduce fragmentation, improving reconstruction time and the overall deduplication ratio by lowering the amount of metadata. As a result, fewer chunks are used to represent a file, reducing the number of disk accesses needed to reconstruct it and requiring fewer entries in the chunk dictionary and fewer hashes to encode the file. To encode a sequence of chunks we proved two facts: (1) any subsequence that repeats is part of some super chunk (supermaximal in Figure 1), therefore the dictionary contains just supermaximals and non-repeats; and (2) maximals are the only repeats not covered (overlapped) by the containing supermaximals, so once we discover the maximals with the suffix array and encode them, there is no need to maintain auxiliary data structures, such as bit masks, to guarantee coverage (encoding) of the entire object. Our experimental evaluation (Figure 2) yields a reduction in fragmentation of 89%-97% and a reduction in dictionary metadata (number of entries) of 80%-97%, without increasing the total amount of storage required for unique chunks.
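To illustrate the core idea, the following is a minimal Python sketch, not the authors' implementation: it builds a suffix array over a file's sequence of chunk IDs, computes the LCP array, and reports repeated sub-sequences that could be merged into super chunks. The paper specifically encodes supermaximal and maximal repeats; this sketch only surfaces repeated sub-sequences, and names such as find_repeats are hypothetical.

```python
def suffix_array(seq):
    """Naive O(n^2 log n) suffix array over a sequence of chunk IDs."""
    return sorted(range(len(seq)), key=lambda i: seq[i:])

def lcp_array(seq, sa):
    """LCP of each suffix with its predecessor in suffix-array order."""
    lcp = [0] * len(sa)
    for k in range(1, len(sa)):
        i, j = sa[k - 1], sa[k]
        while (i + lcp[k] < len(seq) and j + lcp[k] < len(seq)
               and seq[i + lcp[k]] == seq[j + lcp[k]]):
            lcp[k] += 1
    return lcp

def find_repeats(seq, min_len=2):
    """Yield (start, length) of repeated sub-sequences of chunk IDs.

    Each such sub-sequence is a candidate super chunk; the paper keeps
    only supermaximals (plus non-repeats) in the dictionary.
    """
    sa = suffix_array(seq)
    lcp = lcp_array(seq, sa)
    seen = set()
    for k in range(1, len(sa)):
        if lcp[k] >= min_len:
            key = tuple(seq[sa[k]:sa[k] + lcp[k]])
            if key not in seen:
                seen.add(key)
                yield sa[k], lcp[k]

# Example: chunk IDs of a file; the sub-sequence a b c d repeats and
# could be joined into a single super chunk.
chunks = list("abcdxabcdy")
for start, length in find_repeats(chunks):
    print("repeat:", chunks[start:start + length])
```

A production system would replace the naive suffix-array construction with a linear-time algorithm and then filter the reported repeats down to the supermaximal ones described in the abstract.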
Keywords :
client-server systems; data compression; data structures; dictionaries; information retrieval systems; optimisation; auxiliary data structures; block size optimization; client-server architecture; data deduplication; deduplication systems; dictionary based compression method; disk accesses; storage archival; Constraint optimization; Data compression; Data structures; Dictionaries; Encoding; Optimization methods; Data deduplication; File chunking; Information archiving; Storage backup;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Data Compression Conference, 2009. DCC '09.
Conference_Location :
Snowbird, UT
ISSN :
1068-0314
Print_ISBN :
978-1-4244-3753-5
Type :
conf
DOI :
10.1109/DCC.2009.51
Filename :
4976496