A Global Survey on Data Deduplication


Shubhanshi Singhal, Pooja Sharma, Rajesh Kumar Aggarwal, Vishal Passricha
Copyright: © 2018 | Pages: 24
DOI: 10.4018/IJGHPC.2018100103

Abstract

This article describes how data deduplication, which is becoming popular in storage systems, efficiently eliminates redundant data by detecting duplicates and storing only a single instance of them. Digital data is growing much faster than storage volumes, which underlines the importance of data deduplication to scientists and researchers. Data deduplication is considered the most successful and efficient data-reduction technique because it is computationally efficient and offers lossless data reduction. It is applicable to various storage systems, i.e. local storage, distributed storage, and cloud storage. This article discusses the background, components, and key features of data deduplication, which helps the reader understand the design issues and challenges in this field.

Introduction

Digital data is rapidly increasing in size and complexity (Zhang & Huang, 2016). The amount of data produced in 2016 was measured at 16.1 zettabytes, and it is estimated that nearly 163 zettabytes of data will be produced in 2025 (Reinsel, Gantz, & Rydning, 2017). A study conducted by Microsoft Research shows that nearly 50% of the data in primary storage and 80% in secondary storage is redundant (El-Shimi, et al., 2012). This explosive growth of digital data makes data reduction an essential component of large storage systems. Data deduplication offers an efficient storage mechanism that decreases storage cost by eliminating duplicate data, and it is used by nearly 80% of large-scale storage companies (DuBois, Amaldas, & Sheppard, 2011; Meyer & Bolosky, 2012).

In the deduplication process, data files are divided into multiple small blocks known as chunks, and a secure hash digest is calculated for each chunk using a hashing mechanism (e.g. Rabin fingerprint, SHA-1, MD5). Files can be divided into chunks by two different methods: fixed-size chunking and variable-size chunking. The calculated hash digest is known as the fingerprint of the chunk. By comparing these fingerprints, duplicate chunks are identified. Deduplication retains only a unique copy of each chunk by eliminating identical chunks, but the system performing deduplication requires a large amount of RAM to hold the fingerprint index. In return, it efficiently manages disk storage and network bandwidth. Conventionally, data compression mechanisms were used for data reduction. Dictionary-model-based algorithms such as LZ77 (Ziv & Lempel, 1977), LZ78 (Ziv & Lempel, 1978), LZO (Oberhumer, 1997), LZW (Nelson, 1989), and DEFLATE (Deutsch, 1996) detect repetition of short strings: a weak fingerprint of each string is computed and candidate matches are compared byte by byte, so these schemes only exploit redundancy within a much smaller region of the file.
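
To make the chunking and fingerprinting steps concrete, the following minimal Python sketch implements both strategies; the chunk size, window size, boundary mask, and chunk-size bounds are assumptions chosen for illustration, and the rolling hash is a simple stand-in for a Rabin fingerprint rather than the algorithm of any particular system.

```python
import hashlib

CHUNK_SIZE = 4096                    # assumed fixed chunk size (4 KiB)
WINDOW = 48                          # assumed rolling-hash window (bytes)
MASK = 0x1FFF                        # assumed boundary mask (~8 KiB average chunks)
MIN_CHUNK, MAX_CHUNK = 2048, 65536   # assumed bounds on variable chunk size


def fixed_size_chunks(data: bytes):
    """Fixed-size chunking: split the stream into equal-size blocks."""
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]


def variable_size_chunks(data: bytes):
    """Variable-size (content-defined) chunking: declare a boundary whenever a
    rolling hash of the last WINDOW bytes matches MASK, so boundaries follow
    the content in the spirit of Rabin fingerprinting."""
    base = 257
    base_pow = pow(base, WINDOW - 1, 1 << 32)    # weight of the byte leaving the window
    start, h = 0, 0
    for i, byte in enumerate(data):
        if i - start >= WINDOW:                  # slide the window: drop the oldest byte
            h = (h - data[i - WINDOW] * base_pow) % (1 << 32)
        h = (h * base + byte) % (1 << 32)        # add the incoming byte
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == MASK) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]                       # trailing chunk


def fingerprint(chunk: bytes) -> bytes:
    """Fingerprinting: a secure hash digest (here SHA-1, 160 bits) of the chunk."""
    return hashlib.sha1(chunk).digest()
```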

For large-scale storage systems, the performance of data deduplication is much better than that of traditional compression schemes because it works in dual mode. In the first mode, deduplication detects and removes redundant data at the file level or the chunk level; in the second mode, it converts the data chunks into fingerprints with the help of a secure hash function. Deduplication matches these fingerprints, whereas earlier compression methods relied on byte-level comparison. Each fingerprint, e.g. a 160-bit (20-byte) SHA-1 digest, is much smaller than the chunk it represents, which makes this method acceptable for large-scale storage systems.
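
A short sketch makes the size contrast concrete (the 8 KiB chunk size is an assumption for illustration): a SHA-1 fingerprint occupies only 20 bytes, so checking whether two chunks are duplicates reduces to comparing 20-byte digests rather than the chunks themselves.

```python
import hashlib
import os

chunk_a = os.urandom(8192)            # an assumed 8 KiB chunk
chunk_b = bytes(chunk_a)              # a second chunk with identical content

fp_a = hashlib.sha1(chunk_a).digest()
fp_b = hashlib.sha1(chunk_b).digest()

print(len(fp_a))        # 20 -> each fingerprint is 160 bits (20 bytes)
print(fp_a == fp_b)     # True -> duplicate detected by comparing fingerprints,
                        # not by comparing the 8192-byte chunks byte by byte
```

In practice, equality of fingerprints is taken to imply equality of chunks, accepting the negligible probability of a hash collision; some systems additionally verify the bytes before discarding a chunk.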

Redundancy removal in data deduplication works in four stages: chunking, fingerprinting, indexing, and storage management. The workflow of data deduplication is shown in Figure 1. The input data stream is partitioned into chunks, and each chunk is uniquely identified by its fingerprint. A separate metadata list records the sequence of chunks that make up each file, which allows the original file to be regenerated. An index is maintained in RAM for faster matching of the fingerprints, but when the fingerprints outgrow the available RAM, the index is shifted to secondary memory and an on-disk index look-up process manages the partial loading of the large index into RAM. Optimization techniques such as DDFS have been proposed to accelerate this process (Zhu, Li, & Patterson, 2008). Generally, the unique chunks are stored in large storage units called containers (Zhu, Li, & Patterson, 2008). At the end of deduplication, the chunks of a file are therefore spread across several containers, so recovering a file causes several input/output operations to those containers. Detailed information about deduplication is discussed in the coming sections.
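
The four stages can be combined into a toy in-memory deduplication store, sketched below; the class name, the fixed-size chunker, and the 4 MiB container capacity are assumptions for illustration, not the design of DDFS or any other particular system. The index maps fingerprints to container locations, unique chunks are appended to the open container, and the per-file metadata records the ordered fingerprints needed to rebuild the file.

```python
import hashlib

CONTAINER_CAPACITY = 4 * 1024 * 1024   # assumed container size (4 MiB)
CHUNK_SIZE = 4096                      # assumed fixed-size chunking for brevity


class Deduplicator:
    """Toy in-memory deduplication store covering chunking, fingerprinting,
    indexing, and container-based storage management."""

    def __init__(self):
        self.index = {}                # fingerprint -> (container id, offset, length)
        self.containers = [bytearray()]
        self.recipes = {}              # file name -> ordered list of fingerprints

    def _store_chunk(self, fp: bytes, chunk: bytes):
        """Append a unique chunk to the open container, opening a new one when full."""
        if len(self.containers[-1]) + len(chunk) > CONTAINER_CAPACITY:
            self.containers.append(bytearray())
        container = self.containers[-1]
        self.index[fp] = (len(self.containers) - 1, len(container), len(chunk))
        container.extend(chunk)

    def write(self, name: str, data: bytes):
        """Chunk the file, store only previously unseen chunks, and record the
        per-file metadata (the ordered list of fingerprints)."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha1(chunk).digest()   # fingerprinting
            if fp not in self.index:            # index look-up
                self._store_chunk(fp, chunk)    # storage management
            recipe.append(fp)
        self.recipes[name] = recipe

    def read(self, name: str) -> bytes:
        """Regenerate a file from its metadata; this may touch several containers."""
        out = bytearray()
        for fp in self.recipes[name]:
            cid, offset, length = self.index[fp]
            out.extend(self.containers[cid][offset:offset + length])
        return bytes(out)


# Usage: two files with the same content are stored only once at the chunk level.
store = Deduplicator()
payload = b"x" * 10000
store.write("a.bin", payload)
store.write("b.bin", payload)
assert store.read("b.bin") == payload
print(len(store.index))   # 2 unique chunks stored, not the 6 chunks written
```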

Figure 1. Overview of the data deduplication process
