An Automatic Blocking Keys Selection For Efficient Record Linkage

Hamid Naceur Benkhlaed, Djamal Berrabah, Nassima Dif, Faouzi Boufares
Copyright: © 2021 |Pages: 18
DOI: 10.4018/IJOCI.2021010104

Abstract

One of the most important processes in the data quality field is record linkage (RL). RL (also known as entity resolution) is the process of detecting duplicates that refer to the same real-world entity in one or more datasets. The most critical step in the RL process is blocking, which reduces the quadratic complexity of the process by dividing the data into a set of blocks; matching is then performed only between the records in the same block. However, selecting the best blocking keys to divide the data is a hard task, and in most cases it is done by a domain expert. In this paper, a novel unsupervised approach for automatic blocking key selection is proposed. The approach is based on the recently proposed meta-heuristic bald eagle search (BES) optimization algorithm, where the problem is treated as a feature selection case. The results of experiments on real-world datasets show the efficiency of the proposition: BES for feature selection outperformed existing approaches in the literature and returned the best blocking keys.

1. Introduction

In recent years, the world has witnessed a massive explosion in the volume of data, particularly since the adoption of smartphones and social media, which generate huge amounts of data daily. Organizations around the world find themselves needing to integrate their own data, coming from various sources in different formats. These data have to be integrated in order to facilitate analysis and the extraction of useful information. However, data integration can become a very time-consuming process due to data quality problems such as duplicate values, missing values, and referential integrity issues. Stakeholders are now more aware of the importance of data quality, and a lot of money is invested in improving the quality of stored data.

Record Linkage (RL) is one of the most important tasks in the data quality field. RL is defined as the process of identifying the records that represent the same real-world entity during the merge of different data sources. When the RL process is executed on a single database, it can be referred to as deduplication (Sarawagi & Bhamidipaty, 2002). Recently, the RL process has been exploited in several domains for multiple goals, such as privacy preservation, removing duplicates from bibliographic citations, price comparison, and fraud detection.

The surest way to detect all the tuples that refer to the same real-world entity is to compare each record in the dataset with all the others. However, for a very large dataset, the Cartesian product can produce an unacceptable number of comparisons. For example, applying the RL process to databases A and B, each containing 2 million records, would require 4 trillion matching operations, which is not feasible.
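The comparison count above is simply the product of the two dataset sizes; a minimal sketch of the arithmetic (function name is illustrative, not from the paper):

```python
def naive_comparisons(n_a: int, n_b: int) -> int:
    """Number of pairwise comparisons when linking datasets A and B
    without blocking: every record in A is compared with every record in B."""
    return n_a * n_b

# Two databases of 2 million records each:
print(naive_comparisons(2_000_000, 2_000_000))  # 4000000000000 (4 trillion)
```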

To overcome this problem, the RL community proposed the blocking technique. Blocking consists of dividing the data into a set of blocks such that all the records in the same block share a similar value, called the “Blocking Key Value” (BKV). With this technique, matching is performed only between the records that are in the same block.
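The idea can be sketched in a few lines: group records by their BKV, then generate candidate pairs only within each block. This is a generic illustration of blocking, not the paper's implementation; the record fields are made up for the example:

```python
from collections import defaultdict

def block_records(records, key_fn):
    """Group records into blocks keyed by their Blocking Key Value (BKV)."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[key_fn(rec)].append(rec)
    return blocks

def candidate_pairs(blocks):
    """Yield comparison pairs only within each block."""
    for recs in blocks.values():
        for i in range(len(recs)):
            for j in range(i + 1, len(recs)):
                yield recs[i], recs[j]

records = [
    {"name": "arnie morton's", "city": "los angeles"},
    {"name": "arnie mortons",  "city": "los angeles"},
    {"name": "art's deli",     "city": "studio city"},
]
blocks = block_records(records, key_fn=lambda r: r["city"])
print(sum(1 for _ in candidate_pairs(blocks)))  # 1 pair instead of 3 without blocking
```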

Generally, record linkage based on blocking consists of three main steps: data standardization, blocking, and matching (Christen, 2012) (Figure 1). Data standardization normalizes dataset attributes that may represent the same information under different encodings in each source. For example, the sex attribute can be found in one dataset as a binary value (0/1) and in another one as (M/F), so standardizing the data is a very important preprocessing step. The next step is blocking, where one of the blocking techniques from the literature (discussed in the next sections) is used to divide the data into a set of blocks; for example, using the standard blocking approach, all the records that have the same address are grouped into one block. Finally, the records within each block are matched.
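For the sex-attribute example above, standardization amounts to mapping every source-specific encoding onto one canonical form. A minimal sketch, assuming the binary source encodes female as 0 and male as 1 (the mapping and function name are illustrative):

```python
# Canonical encoding chosen here: "F" / "M".
# Assumption for the binary source: 0 = female, 1 = male.
SEX_MAP = {"0": "F", "1": "M", "f": "F", "F": "F", "m": "M", "M": "M"}

def standardize_sex(value) -> str:
    """Map heterogeneous encodings of the sex attribute to one canonical form."""
    return SEX_MAP.get(str(value).strip(), "UNKNOWN")

print(standardize_sex("0"))   # F
print(standardize_sex("M"))   # M
print(standardize_sex("?"))   # UNKNOWN
```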

Figure 1. Record Linkage's phases

Two important parameters control the performance of a good blocking technique. The first one is the blocking key value. A blocking key can be formed using one field (attribute) or a concatenation of several parts from a set of fields. For example, a BKV can be formed using the First-Name value, or by concatenating the first three characters of the First-Name field with the ZIP code from the address field. Table 1 shows an example of blocking keys generated from the restaurant dataset. Two blocking keys were generated. The first one (BK1) is the Soundex phonetic encoding of the restaurant’s name concatenated with the phone number. The second one (BK2) is the NYSIIS phonetic encoding of the restaurant’s name concatenated with the address number.

Table 1. A blocking keys example from the restaurant dataset

| BK1            | BK2         | Name                      | Address                | City        | Phone        | Type     |
|----------------|-------------|---------------------------|------------------------|-------------|--------------|----------|
| A6553102461501 | LASANG435   | arnie morton's of Chicago | 435 s. la cienega blv. | Los Angeles | 310/246-1501 | American |
| H3413104721211 | STADAC12224 | art's deli                | 12224 ventura blvd.    | Studio city | 818-762-1221 | delis    |
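The BK1 construction described above (Soundex of the name concatenated with the phone number) can be sketched as follows. The Soundex routine below is a standard American Soundex implementation, not code from the paper, and the record layout is illustrative:

```python
def soundex(name: str) -> str:
    """Standard American Soundex: first letter + up to three digit codes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    out = []
    prev = codes.get(letters[0], "")
    for c in letters[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            out.append(code)
        if c not in "hw":  # h and w do not separate equal codes
            prev = code
    return (letters[0].upper() + "".join(out) + "000")[:4]

def bk1(record: dict) -> str:
    """BK1 as in Table 1: Soundex of the name + the phone number's digits."""
    phone_digits = "".join(ch for ch in record["phone"] if ch.isdigit())
    return soundex(record["name"]) + phone_digits

rec = {"name": "arnie morton's of Chicago", "phone": "310/246-1501"}
print(bk1(rec))  # A6553102461501, matching BK1 in the first row of Table 1
```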
