A Machine Learning Approach to Data Cleaning in Databases and Data Warehouses

Hamid Haidarian Shahri
DOI: 10.4018/978-1-60566-232-9.ch003

Abstract

Entity resolution (also known as duplicate elimination) is an important part of the data cleaning process, especially in data integration and warehousing, where data are gathered from distributed and inconsistent sources. Learnable string similarity measures are an active area of research in the entity resolution problem. Our proposed framework builds upon our earlier work on entity resolution, in which fuzzy rules and membership functions are defined by the user. Here, we exploit neuro-fuzzy modeling for the first time to produce a unique adaptive framework for entity resolution, which automatically learns and adapts to the specific notion of similarity at a meta-level. This framework encompasses much of the previous work on trainable and domain-specific similarity measures. Employing fuzzy inference, it removes the repetitive task of hard-coding a program based on a schema, which is usually required in previous approaches. In addition, our extensible framework is very flexible for the end user. Hence, it can be utilized to build an intelligent tool that increases the quality and accuracy of data.
Chapter Preview

Introduction

The problems of data quality and data cleaning are inevitable in data integration from distributed operational databases and online transaction processing (OLTP) systems (Rahm & Do, 2000). This is due to the lack of a unified set of standards spanning all the distributed sources. One of the most challenging and resource-intensive phases of data cleaning is the removal of fuzzy duplicate records. Because a large number of records may have to be examined, this removal requires many pairwise comparisons, and each comparison demands a complex matching process.
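To make this cost concrete, the following minimal sketch (an illustration only; the function names and the blocking key are assumptions, not the chapter's method) contrasts naive pairwise comparison, which grows quadratically with the number of records, with a simple blocking scheme that only compares records sharing a cheap key.

```python
# Illustrative sketch (not the chapter's code): why duplicate detection is
# comparison-heavy, and how blocking reduces the number of comparisons.
from itertools import combinations

def naive_candidate_pairs(records):
    """Every pair of records: n * (n - 1) / 2 comparisons for n records."""
    return combinations(records, 2)

def blocked_candidate_pairs(records, key):
    """Group records by a cheap blocking key (e.g., the first letters of a
    name) and compare only records that fall into the same block."""
    blocks = {}
    for rec in records:
        blocks.setdefault(key(rec), []).append(rec)
    for block in blocks.values():
        yield from combinations(block, 2)

records = [{"name": "John Dow"}, {"name": "John Doe"}, {"name": "Jane Roe"}]
print(len(list(naive_candidate_pairs(records))))    # 3 pairs for 3 records
print(len(list(blocked_candidate_pairs(records, lambda r: r["name"][:3].lower()))))  # 1 pair
```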

The term fuzzy duplicates is used for tuples that are somehow different but describe the same real-world entity, that is, different syntaxes but the same semantics. Duplicate elimination (also known as entity resolution) is applicable in any database, but it is critical in data integration and analytical processing domains, where accurate reports and statistics are required. The data cleaning task can itself be considered a variant of data mining. Moreover, in data mining and knowledge discovery applications, cleaning is required before any useful knowledge can be extracted from the data. Other application domains of entity resolution include data warehouses (especially for dimension tables), online analytical processing (OLAP) applications, decision support systems, on-demand (lazy) Web-based information integration systems, Web search engines, and numerous others. Therefore, an adaptive and flexible approach to detecting duplicates can be utilized as a tool in many database applications.
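As a small illustration of this definition (an assumed example, not taken from the chapter), the snippet below scores two syntactically different tuples that describe the same person, using Python's standard difflib as a stand-in for a string similarity measure.

```python
# An assumed illustration of a "fuzzy duplicate": different syntax, same
# real-world entity. difflib.SequenceMatcher (Python standard library) stands
# in for whatever string similarity measure a real matcher would use.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity score in [0, 1] between two field values."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

tuple_a = ("John Dow", "Lucent Laboratories")   # record from one source
tuple_b = ("John Doe", "Lucent Lab.")           # record from another source, same person

# Field-wise scores: no field matches exactly, yet both scores are high,
# so the pair is a candidate fuzzy duplicate.
scores = [similarity(x, y) for x, y in zip(tuple_a, tuple_b)]
print(scores)  # roughly [0.88, 0.67] with this particular measure
```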

When data are gathered from distributed sources, differences between tuples are generally caused by four categories of problems in the data: the data are incomplete, incorrect, incomprehensible, or inconsistent. Examples of such discrepancies include spelling errors; abbreviations; missing fields; inconsistent formats; invalid, wrong, or unknown codes; word transposition; and so forth, as demonstrated using sample tuples in Table 1.

Table 1. Examples of various discrepancies in database tuples

Discrepancy Problem    | Name     | Address             | Phone Number | ID Number | Gender
(original record)      | John Dow | Lucent Laboratories | 615 5544     | 553066    | Male
Spelling Errors        | John Doe | Lucent Laboratories | 615 5544     | 553066    | Male
Abbreviations          | J. Dow   | Lucent Lab.         | 615 5544     | 553066    | Male
Missing Fields         | John Dow | -                   | 615 5544     | -         | Male
Inconsistent Formats   | John Dow | Lucent Laboratories | (021)6155544 | 553066    | 1
Word Transposition     | Dow John | Lucent Laboratories | 615 5544     | 553066    | Male
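The following sketch shows the kind of field-wise evidence such discrepancies produce. The similarity measure, the handling of missing values, and the flat averaging are all illustrative assumptions; they are not the chapter's neuro-fuzzy model, which instead learns fuzzy rules over such scores.

```python
# Hedged sketch: score each Table 1 variant against the original record,
# field by field, and aggregate. A real matcher (e.g., the chapter's
# neuro-fuzzy framework) would learn rules and weights instead of a flat mean.
from difflib import SequenceMatcher

REFERENCE = {"name": "John Dow", "address": "Lucent Laboratories",
             "phone": "615 5544", "id": "553066", "gender": "Male"}

VARIANTS = {
    "spelling error":      {"name": "John Doe", "address": "Lucent Laboratories",
                            "phone": "615 5544", "id": "553066", "gender": "Male"},
    "abbreviations":       {"name": "J. Dow", "address": "Lucent Lab.",
                            "phone": "615 5544", "id": "553066", "gender": "Male"},
    "missing fields":      {"name": "John Dow", "address": "",
                            "phone": "615 5544", "id": "", "gender": "Male"},
    "inconsistent format": {"name": "John Dow", "address": "Lucent Laboratories",
                            "phone": "(021)6155544", "id": "553066", "gender": "1"},
    "word transposition":  {"name": "Dow John", "address": "Lucent Laboratories",
                            "phone": "615 5544", "id": "553066", "gender": "Male"},
}

def field_similarity(a: str, b: str) -> float:
    if not a or not b:              # a missing field gives no evidence either way
        return 0.5
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def mean_score(rec, ref):
    """Average field similarity between a candidate record and the reference."""
    return sum(field_similarity(rec[f], ref[f]) for f in ref) / len(ref)

for label, rec in VARIANTS.items():
    print(f"{label:22s} mean field similarity = {mean_score(rec, REFERENCE):.2f}")
```

A fixed global threshold over such an aggregate score is brittle across domains and schemas; this is precisely where an adaptive, learned notion of similarity, as proposed in this chapter, becomes useful.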
