Data Quality in Data Warehouses

William E. Winkler (U.S. Bureau of the Census, USA)
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-010-3.ch086
Abstract

Fayyad and Uthurusamy (2002) have stated that the majority of the work (representing months or years) in creating a data warehouse is in cleaning up duplicates and resolving other anomalies. This chapter provides an overview of two methods for improving quality. The first is record linkage, for finding duplicates within files or across files. The second is edit/imputation, for enforcing business rules and for filling in missing data. The fastest record linkage methods are suitable for files with hundreds of millions of records (Winkler, 2004a, 2008). The fastest edit/imputation methods are suitable for files with millions of records (Winkler, 2004b, 2007a).
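To make the first method concrete, the following is a minimal sketch of duplicate detection via record linkage. The records, field names, blocking key (ZIP code), and similarity threshold are all hypothetical; production systems such as Winkler's use the Fellegi-Sunter model with field-level match weights and string comparators like Jaro-Winkler rather than this simple ratio.

```python
# Toy duplicate detection: block on ZIP code, then compare names
# within each block using a rough string-similarity score.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity in [0, 1] (a stand-in for Jaro-Winkler)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.85):
    """Return pairs of record ids whose names look alike within a block."""
    blocks = {}
    for rec in records:
        blocks.setdefault(rec["zip"], []).append(rec)
    pairs = []
    for block in blocks.values():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                if similarity(block[i]["name"], block[j]["name"]) >= threshold:
                    pairs.append((block[i]["id"], block[j]["id"]))
    return pairs

# Hypothetical records: 1 and 2 are the same person with a typo.
records = [
    {"id": 1, "name": "William E. Winkler", "zip": "20233"},
    {"id": 2, "name": "Wiliam E Winkler",   "zip": "20233"},
    {"id": 3, "name": "Jane Q. Public",     "zip": "20233"},
]
print(find_duplicates(records))
```

Blocking keeps the comparison count manageable: only records sharing a ZIP code are compared, which is what makes linkage feasible on files with hundreds of millions of records.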

Main Thrust Of The Chapter

This section provides an overview of record linkage and of statistical data editing and imputation. Cleaning and homogenizing the files are preprocessing steps that precede data mining.
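The second method, edit/imputation, can be sketched as follows. The business rule (age must lie in 0-115) and the mean-of-donors imputation are hypothetical illustrations; the chapter's actual methods build on the Fellegi-Holt edit model, which finds the minimal set of fields to change so that every record satisfies all edits.

```python
# Toy edit/imputation: apply a range edit to age, then fill missing
# or invalid income values with the mean of the valid donor values.
def edit_and_impute(records):
    valid_incomes = [r["income"] for r in records
                     if r["income"] is not None and r["income"] >= 0]
    mean_income = sum(valid_incomes) / len(valid_incomes)
    cleaned = []
    for r in records:
        rec = dict(r)
        # Edit: an out-of-range age fails the business rule and is blanked.
        if rec["age"] is not None and not (0 <= rec["age"] <= 115):
            rec["age"] = None
        # Imputation: fill a missing or invalid income from the donor mean.
        if rec["income"] is None or rec["income"] < 0:
            rec["income"] = mean_income
        cleaned.append(rec)
    return cleaned

# Hypothetical rows: the second fails the age edit and lacks an income.
rows = [
    {"age": 34,  "income": 50000},
    {"age": 250, "income": None},
    {"age": 41,  "income": 70000},
]
print(edit_and_impute(rows))
```

Separating the edit step (detecting rule violations) from the imputation step (filling the blanked or missing values) mirrors how statistical agencies structure these systems.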
