Query Processing Based on Entity Resolution


Copyright: © 2014 | Pages: 55
DOI: 10.4018/978-1-4666-5198-2.ch013

Abstract

Dirty data exist in many systems, and efficient, effective management of such data is in demand. Since data cleaning may lose useful data and even introduce new errors, this research attempts to manage dirty data without cleaning and to retrieve query results according to users' quality requirements. Since the entity is the unit for understanding objects in the world, and much dirty data arises from different descriptions of the same real-world entity, this chapter defines an entity data model for managing dirty data and then proposes EntityManager, a dirty-data management system that takes the entity as its basic unit and keeps conflicting values as uncertain attributes. Although its query language is SQL, queries in the system have different semantics on dirty data. To process queries efficiently, this research proposes a novel index, data-operator implementations, and query-optimization algorithms for the system.

Introduction

Data quality has been addressed in different areas, such as statistics, management science, and computer science (Batini & Scannapieca, 2006). Dirty data is the main cause of poor data quality, and many surveys reveal that it exists in most database systems. For example, a survey (Raman, DeHoratius & Ton, 2001) reports that over 65% of the inventory records at the retailer Gamma were inaccurate at the store-SKU level. The consequences of dirty data may be severe: uncertain, duplicate, or inconsistent data leads to ineffective marketing, operational inefficiencies, inferior customer relationship management, and poor business decisions. For example, it is reported (English, 1997) that dirty data in retail databases alone costs US consumers $2.5 billion a year. Therefore, several techniques have been developed to process dirty data and reduce its harm.

Existing work on processing dirty data falls into two broad categories. The first is data cleaning (Rahm & Do, 2000), which detects and removes errors and inconsistencies from data to improve its quality. However, data cleaning cannot remove dirty data exhaustively, and excessive cleaning may lose information. Moreover, existing data cleaning techniques are generally time-consuming; when massive data is updated frequently, repeated cleaning greatly degrades system efficiency. Therefore, researchers have proposed algorithms in a second category, which query dirty data directly and return results annotated with a degree of cleanliness (Andritsos, Fuxman & Miller, 2006; Fuxman & Miller, 2005; Fuxman, Fazli & Miller, 2005).

Several models for managing dirty data without cleaning have been proposed (Boulos et al., 2005; Hassanzadeh & Miller, 2009; Widom, 2004). However, most of these models only consider uncertainty in attribute values and the quality degree of the data, without considering real-world entities and their relationships. This chapter focuses on an entity-based relational database model in which one tuple represents one entity; this model better reflects real-world entities and their relationships.

In applications, different representations of the same real-world entities often lead to inconsistent, uncertain, or duplicate data, especially when multiple data sources must be integrated (Dong, Halevy & Yu, 2009; Lenzerini, 2002). In the entity-based relational database, duplicate data referring to the same real-world entity are combined, and each inconsistent (or uncertain) value is endowed with a value, called its quality degree, that reflects its quality. Example 1 shows this process.

  • Example 1: Consider the fragment of dirty data shown in Table 1. We can easily identify that tuples 1, 3, and 6 refer to the same real-world entity even though their representations differ. By performing entity resolution and combining these three tuples, we obtain one entity tuple. In this process we do not remove any data, because we cannot be completely sure which value is the correct (or real) one: from the table we can only infer that the value of the attribute “Name” is more likely to be “Wal-Mart”, but we cannot rule out the value “Mal-Mart” entirely. So the entity-based relational database preserves all possible values of an attribute, which means that the value of one attribute in a tuple may be uncertain and may contain multiple values. We endow each possible attribute value with a quality degree in accordance with its proportion, as shown in Table 2. In tuples 1, 3, and 6, the value “Wal-Mart” appears twice, so its quality degree is 2/3 ≈ 0.67; the other quality degrees are computed similarly. The result is the entity tuple shown in Table 2.

Table 1.
A dirty data fragment

| ID | Name      | City    | Zipcode | Phone        | Representative |
|----|-----------|---------|---------|--------------|----------------|
| 1  | Wal-Mart  | Beijing | 90015   | 80103389     | Sham           |
| 2  | Carrefour | Harbin  | 20016   | 80374832     | Morgan         |
| 3  | Wal-Mart  | BJ      | 90015   | 010-80103389 | Sham           |
| 4  | Walmart   | Harbin  | 20040   | 70937485     | Sham           |
| 5  | Carrefour | Beijing | 90015   | 83950321     | Morgan         |
| 6  | Mal-Mart  | Beijing | 90015   | 80103389     | Sham           |
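The merging step of Example 1 can be sketched in a few lines of Python. This is only an illustration of the proportion-based quality-degree computation described above, not the actual EntityManager implementation; the record layout and the `merge_entity` helper are assumptions made for the example.

```python
from collections import Counter

def merge_entity(tuples):
    """Merge duplicate records that refer to one real-world entity.

    The resulting entity tuple keeps every distinct value of each
    attribute, weighted by a quality degree equal to its relative
    frequency among the merged records (illustrative sketch only).
    """
    attributes = tuples[0].keys()
    entity = {}
    for attr in attributes:
        counts = Counter(rec[attr] for rec in tuples)
        total = sum(counts.values())
        entity[attr] = {value: round(n / total, 2) for value, n in counts.items()}
    return entity

# Tuples 1, 3, and 6 from Table 1: same entity, different representations.
records = [
    {"Name": "Wal-Mart", "City": "Beijing", "Zipcode": "90015", "Phone": "80103389"},
    {"Name": "Wal-Mart", "City": "BJ",      "Zipcode": "90015", "Phone": "010-80103389"},
    {"Name": "Mal-Mart", "City": "Beijing", "Zipcode": "90015", "Phone": "80103389"},
]
entity = merge_entity(records)
print(entity["Name"])  # {'Wal-Mart': 0.67, 'Mal-Mart': 0.33}
```

With "Wal-Mart" appearing in two of the three merged tuples, its quality degree is 2/3 ≈ 0.67, matching Example 1; no value is discarded, so "Mal-Mart" survives with degree 0.33.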
