Secure Data Deduplication of Encrypted Data in Cloud

Sumit Kumar Mahana, Rajesh Kumar Aggarwal
DOI: 10.4018/978-1-5225-7335-7.ch010

Abstract

In the present digital scenario, data is of prime significance for individuals and organizations alike. The volume of data being produced grows exponentially with time, and the huge amount of redundant content stored on the cloud imposes a severe load on cloud storage systems, which cannot be accepted. A storage optimization strategy is therefore a fundamental prerequisite for cloud storage systems. Data deduplication is a storage optimization strategy that deletes identical copies of redundant data, optimizes bandwidth, improves the utilization of storage space, and hence minimizes storage cost. To guarantee security, the data stored on the cloud must be kept in encrypted form. Consequently, performing deduplication securely over encrypted information in the cloud is a challenging job. This chapter discusses various existing data deduplication techniques that address this challenge while keeping the data on the cloud secure.

Introduction

Recent technology improvements have driven the acceptance and progress of cloud computing. Owing to the various advantages of distributed storage, for instance, ease of access and bandwidth optimization, end users across the globe tend to move their valuable information to distributed platforms such as cloud storage. According to a report by the International Data Corporation (IDC), the volume of digital data worldwide will reach around 40 trillion gigabytes by the year 2020 (Chen, Mu, Yang & Guo, 2015; Gantz & Reinsel, 2012). As the quantity of data being produced continuously increases with the passage of time, there is an urgency to eliminate the duplicate content stored on distributed storage media like cloud storage in order to give sufficient storage to users (Movaliya & Shah, 2016). However, outsourcing data to a third party such as a cloud storage server causes security and privacy issues to become a serious concern. Distributed storage providers utilize distinctive strategies to enhance storage effectiveness, and one of the prominent strategies used by numerous cloud storage providers is deduplication, which is claimed to save a huge amount of storage (Douceur, Adya, Bolosky, Simon & Theimer, 2002). Figure 1 illustrates the data deduplication process. Data deduplication, at times also called intelligent compression or single-instance storage, is frequently used in combination with other forms of data reduction.

Figure 1.

Data deduplication

Presently, the data deduplication technique is broadly utilized by different commercially known cloud storage providers, e.g., Dropbox, Amazon S3, Mozy, Google Drive, and Memopal, to save a lot of storage space and maintenance cost (Akhila, Ganesh & Sunitha, 2016; Di Pietro & Sorniotti, 2012; Harnik, Pinkas, & Shulman-Peleg, 2010).

Workflow of Data Deduplication

Data deduplication is accomplished in three major steps: chunking, fingerprinting, and indexing of fingerprints. The workflow of data deduplication is shown in Figure 2. Chunking partitions the input data file stream into small blocks known as chunks. There are two main approaches to dividing the input stream into chunks: fixed-size chunking and variable-size chunking. The duplicate detection ratio of variable-size chunking is better than that of fixed-size chunking; a comparison of the two approaches is sketched below.

Figure 2.

Workflow of data deduplication

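To make the contrast between the two chunking approaches concrete, the following Python sketch implements both. The rolling-style hash, boundary mask, and size limits are illustrative assumptions, not parameters prescribed by the chapter.

    def fixed_size_chunks(data: bytes, size: int = 4096) -> list:
        """Fixed-size chunking: split the stream at fixed byte offsets."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    def variable_size_chunks(data: bytes, min_size: int = 2048,
                             boundary_mask: int = 0x0FFF,
                             max_size: int = 16384) -> list:
        """Variable-size (content-defined) chunking.

        The hash value depends only on the most recent bytes (older
        contributions shift out of the 32-bit value), so boundaries
        follow the content rather than fixed offsets. An insertion near
        the start of a file shifts only nearby boundaries, which is why
        variable-size chunking detects more duplicates.
        """
        chunks, start, h = [], 0, 0
        for i, byte in enumerate(data):
            h = ((h << 1) + byte) & 0xFFFFFFFF  # cheap content-dependent hash
            length = i - start + 1
            if (length >= min_size and (h & boundary_mask) == 0) \
                    or length >= max_size:
                chunks.append(data[start:i + 1])  # boundary found: emit chunk
                start, h = i + 1, 0
        if start < len(data):
            chunks.append(data[start:])  # emit the final partial chunk
        return chunks
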
After chunking, a cryptographic hash function (e.g., SHA-1, SHA-256, MD5) is applied to each chunk to calculate its fingerprint; this process is referred to as fingerprinting. Each chunk must have a unique fingerprint value. Only unique chunks are placed on disk after verifying their uniqueness through their fingerprints; redundant chunks are merely recorded as references to the chunks already stored. Indexing is a way to organize fingerprints on disk. Numerous strategies have been suggested for the on-disk index-lookup process, but their search time and running cost are high. Indexing must be fast enough to check the uniqueness of any fingerprint in the least time (Singhal, Kaushik & Sharma, 2018).
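
As a minimal in-memory sketch of fingerprinting and index lookup (the dictionary stands in for the on-disk index, and SHA-256 is one of the hash functions named above):

    import hashlib

    def fingerprint(chunk: bytes) -> str:
        """Fingerprinting: a cryptographic hash identifies the chunk."""
        return hashlib.sha256(chunk).hexdigest()

    def deduplicate(chunks, index) -> list:
        """Indexing: store a chunk only if its fingerprint is unknown.

        Returns the file 'recipe': the ordered list of fingerprints
        needed to rebuild the file. A production system would keep the
        index on disk behind a fast lookup structure.
        """
        recipe = []
        for chunk in chunks:
            fp = fingerprint(chunk)
            if fp not in index:
                index[fp] = chunk  # unique chunk: write it to storage
            recipe.append(fp)      # duplicates become references only
        return recipe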

Data deduplication is, in essence, a strategy for improving storage space utilization by removing identical data content and confirming that only one unique instance of the content is actually kept on the storage server, e.g., a cloud storage system. Each identical copy is replaced with a pointer to the unique copy of the data.
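
Continuing the sketches above, the following hypothetical usage shows that an identical second file costs no extra chunk storage; only its ordered list of pointers (fingerprints) is recorded:

    index = {}                                 # fingerprint -> stored chunk
    file_a = b"some file content " * 1000
    file_b = b"some file content " * 1000      # byte-identical second upload

    recipe_a = deduplicate(variable_size_chunks(file_a), index)
    recipe_b = deduplicate(variable_size_chunks(file_b), index)

    assert recipe_b == recipe_a                # second file is pointers only
    assert b"".join(index[fp] for fp in recipe_a) == file_a  # lossless rebuild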

Key Terms in this Chapter

Message-Locked Encryption: A cryptographic primitive where the key under which encryption and decryption are performed is itself derived from the message.

Integrity: Protecting the information from being modified by unauthorized parties.

Confidentiality: Protecting the information from disclosure to unauthorized parties.

Homomorphic Encryption: A cryptographic method that allows performing mathematical calculations on encrypted information (cipher text) without decrypting it first.

Availability: Ensuring that authorized parties are able to access the information when needed.

Convergent Encryption: A cryptosystem which generates identical ciphertext from identical plaintext files; a short illustrative sketch follows this list.
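
To make the Message-Locked Encryption and Convergent Encryption entries concrete, here is a minimal Python sketch of the basic convergent construction, assuming the third-party cryptography package; the key and nonce derivations are illustrative choices, not a scheme prescribed by the chapter. Because identical plaintexts yield identical keys and nonces, they produce identical ciphertexts, which is what allows the cloud to deduplicate data it cannot read.

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_encrypt(message: bytes) -> tuple:
        """Encrypt under a key derived from the message itself."""
        key = hashlib.sha256(message).digest()     # message-locked key
        nonce = hashlib.sha256(key).digest()[:12]  # deterministic nonce
        ciphertext = AESGCM(key).encrypt(nonce, message, None)
        return ciphertext, key                     # the user keeps the key

    def convergent_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        """Any holder of the message-derived key can decrypt."""
        nonce = hashlib.sha256(key).digest()[:12]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

Two users who upload the same file thus produce byte-identical ciphertexts, so the server can deduplicate them without learning the plaintext, while each user retains the message-derived key for later decryption.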
