An Efficient Algorithm for Data Cleaning
Payal Pahwa (Guru Gobind Singh IndraPrastha University, India), Rajiv Arora (Guru Gobind Singh IndraPrastha University, India) and Garima Thakur (Guru Gobind Singh IndraPrastha University, India)
DOI: 10.4018/978-1-4666-1873-2.ch017

Abstract

The quality of the real-world data fed into a data warehouse is a major concern today. Because the data comes from a variety of sources, it must be checked for errors and anomalies before being loaded into the data warehouse. The source data may contain exact or approximate duplicate records, and the presence of incorrect or inconsistent data can significantly distort the results of analyses, often negating the potential benefits of information-driven approaches. This paper addresses the detection and correction of such duplicate records. It also analyzes data quality and the various factors that degrade it, and briefly reviews existing work, pointing out its major limitations. A new framework is then proposed that improves on the existing technique.

Introduction

Data warehousing is the process of transforming data into information and making it available to users in a timely manner.

A data warehouse is a central repository of an organization's electronically stored data (http://en.wikipedia.org). Our approach focuses on identifying approximate duplicate records before they are loaded into the data warehouse. Hence, we present a brief overview of the various sources of errors that arise from machine or human intervention (Hernandez & Stolfo, 1995, 1998).
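The chapter's own detection framework is developed later; as an illustrative sketch only, approximate duplicates can be flagged by normalizing each record and comparing pairs with a string-similarity measure. The helper names and the threshold below are our assumptions, not the authors' method (here we use Python's standard-library difflib ratio):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two case/whitespace-normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_approximate_duplicates(records, threshold=0.85):
    """Naive pairwise scan: flag index pairs whose similarity meets the threshold.

    Illustrative only -- O(n^2) comparisons; real cleaning systems prune the
    candidate pairs first (e.g., by sorting or blocking on a key).
    """
    duplicates = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                duplicates.append((i, j))
    return duplicates

# Example: the first two records are approximate duplicates of each other.
records = [
    "John Smith, 12 Oak St",
    "Jon Smith, 12 Oak St.",
    "Mary Jones, 4 Elm Rd",
]
print(find_approximate_duplicates(records))
```

Such a pairwise scan is only feasible for small inputs; the point of frameworks like the one proposed here is to reduce the number of comparisons while still catching near-matches.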
