Analytical Performance of Modified One-Way Hash Algorithm for Data Integrity Verification in Cloud Computing

Meena Kumari, Rajender Nath
Copyright: © 2018 |Pages: 11
DOI: 10.4018/IJGC.2018070102
Abstract

Cloud computing is a revolution in the IT industry due to its scalability, efficiency, and availability. Along with these benefits, cloud computing brings certain security issues that a user must take into consideration. The security of data and the verification of its integrity in the cloud are major issues that act as a barrier to the adoption of cloud computing. In the authors' previously published work, a one-way hash function was proposed to verify data integrity at the cloud storage site. This article extends that prior work: the modified one-way hash algorithm is implemented using the Hadoop distributed file system (HDFS) service of the Hadoop environment, and analytical testing is performed to ascertain its performance. Statistical and experimental results reveal that the proposed algorithm is robust, ensures data integrity, and fulfills almost all essential features of a secure hash function.

1. Introduction

With advancements in the field of distributed computing, cloud computing has become increasingly popular as a powerful parallel data processing model (Kumari, 2017). Due to the security risks associated with cloud computing, enterprises hesitate to adopt it. There is always a chance that data loses consistency and integrity when it is stored on a remote server. To alleviate this security concern, hashing techniques are used (Kumari, 2016). A hash function converts bit strings of arbitrary, finite length into bit strings of a preset length. A message x of arbitrary size is taken as input by a hash function and, after certain operations on it, the output produced is termed the message digest h(x). The generated message digest is of fixed size, much smaller than the original message. A hash function must have the following security properties:

  • 1.

    Pre-image Resistance (One-Way): For a given digest h(x) of an input x, it is computationally infeasible to find x from h(x);

  • 2.

    Second Pre-image Resistance (Weak Collision Resistance): For a given input x, it is computationally infeasible to find x' ≠ x such that h(x) = h(x');

  • 3.

    Collision Resistance (Strong Collision Resistance): It is computationally infeasible to find any two distinct inputs x and x' such that h(x) = h(x').

Collision resistance implies second pre-image resistance but does not imply pre-image resistance. Second pre-image resistance prevents an attacker from constructing a message with the same digest as a message the attacker does not control, whereas collision resistance prevents an attacker from crafting two distinct messages with the same digest.
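These properties can be observed with any standard hash function. The snippet below uses Python's `hashlib` with SHA-256 (as a stand-in, not the algorithm proposed in this paper) to illustrate the fixed digest length and the avalanche behaviour that makes pre-images and collisions hard to find in practice:

```python
import hashlib

# Digests have a fixed length regardless of input size
# (SHA-256 always yields 32 bytes, i.e. 64 hex characters).
short = hashlib.sha256(b"x").hexdigest()
long_ = hashlib.sha256(b"x" * 1_000_000).hexdigest()

# A one-character change in the input yields an unrelated digest
# (avalanche effect), so an attacker cannot steer h(x') toward h(x).
d1 = hashlib.sha256(b"message").hexdigest()
d2 = hashlib.sha256(b"massage").hexdigest()
```

Because the digest is much smaller than the message space, collisions must exist in principle; the security properties only require that finding one is computationally infeasible.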

The major issues of accuracy and integrity of data archived on distributed cloud servers must be tackled by a suitable security measure. Any such measure should meet all the requirements of a secure and robust security mechanism while also performing efficiently. This section summarizes the authors' previous work (Kumari, 2018), in which a one-way hash algorithm was proposed that embraces all the above-mentioned properties. The algorithm works in three phases: file preprocessing, singular key matrix generation, and final hash generation. In the first phase, bits are padded to the user's file if needed so that the file size becomes divisible by the block size, and the file is expressed as square matrices; each character in a matrix is then represented by its ASCII equivalent. During the second phase, a matrix of the same dimension as a block matrix is populated with random values and converted into a singular matrix. In the final phase, intermediate hashes are generated by multiplying each block matrix, one by one, with the singular key matrix. Finally, the final digest is calculated by applying the binary exclusive OR (XOR) operation to the intermediate hashes.
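The three phases above can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' implementation: the block dimension (`BLOCK = 4`), the mod-256 arithmetic, and the particular way the key matrix is made singular are all assumptions made for the sketch.

```python
import secrets

BLOCK = 4  # assumed block (matrix) dimension; the paper's choice may differ


def pad(data: bytes) -> bytes:
    # Phase 1: pad so the byte count fills whole BLOCK x BLOCK matrices.
    need = (-len(data)) % (BLOCK * BLOCK)
    return data + bytes(need)


def to_blocks(data: bytes):
    # Express the (ASCII-valued) bytes as square matrices.
    step = BLOCK * BLOCK
    vals = list(data)
    for i in range(0, len(vals), step):
        chunk = vals[i:i + step]
        yield [chunk[r * BLOCK:(r + 1) * BLOCK] for r in range(BLOCK)]


def singular_key() -> list:
    # Phase 2: random matrix forced to be singular by making the last row
    # the sum of the others (one possible construction; the paper may differ).
    m = [[secrets.randbelow(256) for _ in range(BLOCK)] for _ in range(BLOCK - 1)]
    m.append([sum(col) for col in zip(*m)])
    return m


def matmul(a, b):
    # Matrix product reduced mod 256 so entries stay in byte range (assumption).
    return [[sum(x * y for x, y in zip(row, col)) % 256
             for col in zip(*b)] for row in a]


def digest(data: bytes, key: list) -> bytes:
    # Phase 3: multiply each block by the singular key matrix and XOR
    # the intermediate hashes together to form the final digest.
    acc = [[0] * BLOCK for _ in range(BLOCK)]
    for block in to_blocks(pad(data)):
        inter = matmul(block, key)
        acc = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(acc, inter)]
    return bytes(v for row in acc for v in row)
```

The digest length is always BLOCK² bytes regardless of input size, and the singularity of the key matrix makes the block-to-intermediate-hash mapping non-invertible, which is the source of the one-way property.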

The modified one-way hash algorithm proposed in this paper is implemented on HDFS, which enables data storage across distributed clusters and permits data access by multiple users at the same time. The proposed algorithm is examined against existing hash algorithms. It has been shown experimentally that the proposed hash algorithm not only provides an effective means for verifying data integrity in a cloud computing environment but is also robust and satisfies almost all the essential features of a secure hash function.
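In a cloud setting, integrity verification with such a digest typically follows a compute-before-upload, recompute-after-retrieval pattern. The sketch below illustrates that workflow only; it uses SHA-256 from Python's `hashlib` as a stand-in for the proposed algorithm, and the function names are hypothetical.

```python
import hashlib


def upload_digest(data: bytes) -> str:
    # Computed by the client before the file is sent to cloud storage;
    # the digest is kept (or stored separately) as the reference value.
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, stored_digest: str) -> bool:
    # Recomputed at retrieval time; any mismatch means the stored
    # copy was altered, corrupted, or truncated in the cloud.
    return hashlib.sha256(data).hexdigest() == stored_digest
```

The same pattern applies on HDFS: the digest can be computed per block before distribution and rechecked when blocks are read back from the cluster.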

The rest of the paper is organized as follows: Section 2 describes the related work, Section 3 defines the problem formulation, and Section 4 presents the proposed one-way hash algorithm. Section 5 demonstrates the implementation of the underlying algorithms with an experimental study. Section 6 presents the performance evaluation and results, followed by concluding remarks and future research perspectives in Section 7.
