Improving the Quality of Linked Data Using Statistical Distributions

Heiko Paulheim (University of Mannheim, Germany) and Christian Bizer (University of Mannheim, Germany)
DOI: 10.4018/978-1-5225-5191-1.ch074

Abstract

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data is likely to be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., both operate on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
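The core idea behind both algorithms can be illustrated with a small sketch. The following is a simplified, hypothetical Python rendition (toy triples and type assignments are invented for illustration, not taken from DBpedia): an SDType-style step infers types for an untyped resource by averaging the type distributions of the properties it uses, and an SDValidate-style step scores a statement by the relative frequency of its predicate among properties pointing at objects of the same type. It is a minimal sketch of the statistical intuition, not the authors' full weighting scheme.

```python
from collections import Counter, defaultdict

# Toy knowledge base (hypothetical example data):
triples = [
    ("Berlin", "locatedIn", "Germany"),
    ("Paris", "locatedIn", "France"),
    ("Mannheim", "locatedIn", "Germany"),  # Mannheim has no type statement
    ("Germany", "hasCapital", "Berlin"),
]
types = {
    "Berlin": {"City"},
    "Paris": {"City"},
    "Germany": {"Country"},
    "France": {"Country"},
}

# --- SDType-style type inference (simplified) ---
# For each property, count the types of the subjects that use it.
subj_type_counts = defaultdict(Counter)
prop_use = Counter()
for s, p, _ in triples:
    prop_use[p] += 1
    for t in types.get(s, ()):
        subj_type_counts[p][t] += 1

def predict_types(resource):
    """Average P(type | property) over the properties the resource uses."""
    scores, n = Counter(), 0
    for s, p, _ in triples:
        if s == resource:
            n += 1
            for t, c in subj_type_counts[p].items():
                scores[t] += c / prop_use[p]
    return {t: v / n for t, v in scores.items()} if n else {}

# --- SDValidate-style statement scoring (simplified) ---
# For each object type, count which properties point at objects of that type.
obj_prop_counts = defaultdict(Counter)
for _, p, o in triples:
    for t in types.get(o, ()):
        obj_prop_counts[t][p] += 1

def statement_confidence(s, p, o):
    """Relative frequency of p among properties pointing at o's types;
    a low score flags the statement as potentially faulty."""
    scores = [obj_prop_counts[t][p] / sum(obj_prop_counts[t].values())
              for t in types.get(o, ())]
    return max(scores) if scores else 0.0

print(predict_types("Mannheim"))
print(statement_confidence("Berlin", "locatedIn", "Germany"))
```

In this toy example, Mannheim receives the type "City" with high confidence because its only property, `locatedIn`, is predominantly used by subjects typed as City, while an implausible statement such as ("Berlin", "hasCapital", "Germany") would receive a low confidence score, since `hasCapital` is never observed pointing at a Country.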

Data Quality Issues With Noisy And Incomplete Linked Data Sets

Data quality is not a single measure but has multiple dimensions. Pipino et al. (2002) list several such dimensions, ranging from accessibility to completeness. Moreover, many of these dimensions, such as relevance, cannot be assessed in a context-free manner, but depend on the task at hand. Thus, data quality is generally conceived as "fitness for use" (Wang et al., 1996), i.e., the capability of data to meet the requirements of a specific user in a given use case.
