Efficient Algorithms for Cleaning and Indexing of Graph data

Santhosh Kumar D. K. (Canara Engineering College, India) and Demain Antony DMello (Canara Engineering College, Visvesvaraya Technological University (VTU), Belagavi, India)
Copyright: © 2020 |Pages: 19
DOI: 10.4018/IJOSSP.2020070101


Information extraction and analysis from enormous graph data is expanding rapidly. Surveys indicate that 80% of researchers spend more than 40% of their project time on data cleaning, which signifies a huge need for it. Due to the characteristics of big data, storage and retrieval are another major concern, addressed by data indexing. Existing data cleaning techniques try to clean graph data based on a single kind of information, such as structural attributes or event-log sequences; cleaning graph data on one piece of information alone will not increase computational performance. Since the labels, along with the nodes, can also be inconsistent, it is highly desirable to clean both to improve performance. This paper addresses the aforesaid issue by proposing a graph data cleaning algorithm that detects unstructured information along with inconsistent labeling, cleans the data by applying rules, and verifies the result against data inconsistencies. The authors also propose an indexing algorithm based on the CSS-tree to build efficient and scalable graph indexing on top of Hadoop.

1. Introduction

Big Data has been a highly discussed phrase over the last decade. Due to digitalization, social networks, IoT, healthcare, automation, etc., data is produced at a high rate, variety, and volume; such data is categorized into five major types. One category, the data store, classifies data by how it is stored and comes in three forms: 1) document-oriented, 2) column-oriented, and 3) graph data (Santhosh Kumar D K and Demian Antony D’Mello, 2020). In many situations, the data generated from the aforementioned sources is in numerical form or converted to numerical form. Such data is recorded in a matrix and used as a parameter to a predictive model. These data have relationships that can be captured as a graph, i.e., a collection of vertices and edges G(V, E). This leads to many challenges, creates opportunities, and drives the need for graph data analytics and graph processing models (Kalashnikov and Mehrotra, 2006).
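The idea of capturing pairwise relationships in records as a graph G(V, E) can be sketched as follows; this is an illustrative example (the sample edge data and function name are hypothetical, not from the paper):

```python
# Minimal sketch: representing relationships as an undirected graph
# G(V, E) using an adjacency list keyed by vertex.
from collections import defaultdict

def build_graph(edges):
    """Return an adjacency-list representation of an undirected graph."""
    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)  # edge (u, v) recorded in both directions
        adjacency[v].add(u)
    return adjacency

# Hypothetical social-network edge records.
edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]
graph = build_graph(edges)
print(sorted(graph["alice"]))  # neighbours of "alice"
```

An adjacency list is chosen over a full matrix here because real social graphs are sparse; the matrix form mentioned above trades memory for constant-time edge checks.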

Graph data is collected from a variety of sources and is not initially in a form that can be analyzed or fed to algorithms. Before analysis, the essential step is to construct the graph data and store it for further processing. To construct the graph, data is first extracted and integrated from multiple sources such as social networks and Web pages, and then prepared. The quality of the input drives the effectiveness of computing models in deriving accurate outcomes; otherwise, the complete model follows the “garbage-in, garbage-out” principle, and dirty data has unpredictable effects on the model's outcome (Gani et al., 2016). Cleansing can be done either manually or by ad-hoc routines, because cleaning depends purely on the data of interest, which may vary from one model to another (Heidari et al., 2018). It is evident from surveys of the last decades that more than 80% of researchers working on data analysis spend more than 40% of their project time on cleaning and preparing the data (Kalashnikov and Mehrotra, 2006).
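The rule-based cleaning step described above can be illustrated with a hedged sketch: inconsistent labels are normalised to one canonical form, and records missing mandatory fields are dropped. The rules, field names, and sample records here are assumptions for illustration, not the paper's actual algorithm:

```python
# Sketch of rule-based cleaning for node records before graph construction.
def clean_records(records, mandatory_fields=("id", "label")):
    cleaned = []
    for rec in records:
        # Rule 1: discard records with a missing or empty mandatory field.
        if any(not rec.get(field) for field in mandatory_fields):
            continue
        # Rule 2: normalise inconsistent label casing and whitespace.
        rec = dict(rec)  # copy so the dirty input is left untouched
        rec["label"] = rec["label"].strip().lower()
        cleaned.append(rec)
    return cleaned

dirty = [{"id": 1, "label": " Person "},
         {"id": 2, "label": "PERSON"},
         {"id": 3, "label": ""}]
print(clean_records(dirty))
```

After cleaning, records 1 and 2 share the canonical label "person", so the two previously inconsistent node labels can be merged, while the incomplete record 3 is discarded.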

Graphs are used to quickly link various kinds of associated information, which has made them pervasive, with high volume and diversity. Such massive collections of graphs, with billions of vertices (entities) and edges (relationships) generated by social networks and the Web, lead to open challenges in managing and processing graph data (Zomaya and Sakr, 2017). Graph data has driven the development of new, powerful frameworks with highly distributed parallel graph processing models. Examples of graph data include social networks, customer-interaction networks in enterprises, biological networks, and bibliographic citation networks. Graph data poses several challenges for implementations that must meet these requirements (Heidari et al., 2018).

For examining big data, an effective indexing strategy needs to be sketched out. Current technologies fall short of indexing big data; they are not built to inspect such grouped, distributed, multi-sized data scaling from terabytes to petabytes (Mittal, 2017). Indexing is employed in big data to accomplish retrieval from large, complicated data sets with scalable, distributed storage in the cloud. It is impractical to explore such records manually; a practical, high-throughput indexing technique would optimize the execution of data queries. Hence, effective indexing strategies are necessary for efficiently accessing big data (Douze, Sablayrolles and Jegou, 2018).
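The core idea behind a pointer-free index in the spirit of the CSS-tree mentioned in the abstract can be sketched as follows: keys live in one contiguous sorted array and are located by search over that array, so no per-node child pointers are stored. A full CSS-tree additionally groups keys into cache-line-sized nodes whose children are found by arithmetic; that detail, and the class and data below, are simplifying assumptions for illustration:

```python
# Sketch of a pointer-free sorted-array index (CSS-tree-like in spirit).
import bisect

class SortedIndex:
    def __init__(self, pairs):
        pairs = sorted(pairs)                 # sort (key, value) pairs once
        self.keys = [k for k, _ in pairs]     # contiguous key array
        self.values = [v for _, v in pairs]   # parallel value array

    def lookup(self, key):
        """Binary search over the contiguous key array; O(log n)."""
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None                           # key absent

index = SortedIndex([(10, "v10"), (3, "v3"), (7, "v7")])
print(index.lookup(7))   # -> "v7"
print(index.lookup(5))   # -> None
```

Keeping keys contiguous is what makes such structures cache-friendly: each binary-search probe touches a predictable region of memory rather than chasing pointers, which is the property CSS-trees exploit at cache-line granularity.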
