Elementary: Large-Scale Knowledge-Base Construction via Machine Learning and Statistical Inference

Feng Niu (Computer Sciences Department, University of Wisconsin-Madison, USA), Ce Zhang (Computer Sciences Department, University of Wisconsin-Madison, USA), Christopher Ré (Computer Sciences Department, University of Wisconsin-Madison, USA) and Jude Shavlik (Computer Sciences Department, University of Wisconsin-Madison, USA)
Copyright: © 2012 |Pages: 32
DOI: 10.4018/jswis.2012070103

Abstract

Researchers have approached knowledge-base construction (KBC) with a wide range of data resources and techniques. The authors present Elementary, a prototype KBC system that combines diverse resources and different KBC techniques via machine learning and statistical inference to construct knowledge bases. Using Elementary, they have implemented a solution to the TAC-KBP challenge with quality comparable to the state of the art, as well as an end-to-end online demonstration that automatically and continuously enriches Wikipedia with structured data by reading millions of webpages on a daily basis. The authors describe several challenges and their solutions in designing, implementing, and deploying Elementary. They first describe the conceptual framework and architecture of Elementary, which integrates different data resources and KBC techniques in a principled manner. They then discuss how they address scalability challenges to enable Web-scale deployment, and empirically show that their decomposition-based inference approach achieves higher performance than prior inference approaches. To validate the effectiveness of Elementary's approach to KBC, they experimentally show that its ability to incorporate diverse signals improves KBC quality.
Introduction

Knowledge-base construction (KBC) is the process of populating a knowledge base (KB) with facts (or assertions) extracted from text. It has recently received tremendous interest from academia (Weikum & Theobald, 2010), e.g., CMU's NELL (Carlson, Betteridge, Kisiel, Settles, Hruschka, & Mitchell, 2010; Lao, Mitchell, & Cohen, 2011) and MPI's YAGO (Kasneci, Ramanath, Suchanek, & Weikum, 2008; Nakashole, Theobald, & Weikum, 2011), and from industry (Fang, Sarma, Yu, & Bohannon, 2011), e.g., IBM's DeepQA (Ferrucci et al., 2010) and Microsoft's EntityCube (Zhu, Nie, Liu, Zhang, & Wen, 2009). To construct high-quality knowledge bases from text, researchers have considered a wide range of data resources and techniques, e.g., pattern matching with dictionaries listing entity names (Riloff, 1993), bootstrapping from existing knowledge bases like Freebase and YAGO (Suchanek, Kasneci, & Weikum, 2007), disambiguation using web links and search results (Hoffart, Yosef, Bordino, Fürstenau, Pin, Spaniol, ... Weikum, 2011; Dredze, McNamee, Rao, Gerber, & Finin, 2010), rule-based extraction with regular expressions curated by domain experts (Derose, Shen, Fei, Lee, Burdick, Doan, & Ramakrishnan, 2007; Chiticariu, Krishnamurthy, Li, Raghavan, Reiss, & Vaithyanathan, 2010), and training statistical models with annotated text (Lafferty, McCallum, & Pereira, 2001). All these resources are valuable because they are complementary in terms of cost, quality, and coverage; ideally one would like to use them all. In taking advantage of different kinds of data resources, a major problem that KBC systems face is coping with imperfect or conflicting information from multiple sources (Weikum & Theobald, 2010). (We use the term "information" to refer to both data and algorithms that can be used for a KBC task.) To address this issue, several recent KBC projects (Carlson et al., 2010; Kasneci et al., 2008; Nakashole et al., 2011; Zhu et al., 2009; Lao et al., 2011) use statistical inference to combine different data resources.
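To make two of the signal types above concrete, the following is a minimal, hypothetical sketch (not Elementary's implementation) of dictionary-based entity matching combined with a hand-written regex extraction rule; the dictionary entries and the pattern are invented for illustration:

```python
import re

# Hypothetical entity dictionary and extraction rule, for illustration only.
PERSON_DICT = {"Marie Curie", "Alan Turing"}
BORN_IN = re.compile(r"(\b[A-Z]\w+ [A-Z]\w+) was born in ([A-Z]\w+)")

def extract(sentence):
    """Return candidate (person, birthplace, in_dictionary) tuples.

    The regex proposes a candidate fact; the dictionary signal records
    whether the person string is a known entity, which a downstream
    inference step could use as evidence.
    """
    facts = []
    for person, place in BORN_IN.findall(sentence):
        facts.append((person, place, person in PERSON_DICT))
    return facts

print(extract("Marie Curie was born in Warsaw."))
# -> [('Marie Curie', 'Warsaw', True)]
```

Each signal alone is noisy (the regex over-matches, the dictionary is incomplete), which is precisely why systems resort to statistical inference to weigh them against each other.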

Motivated by the above observation, we present Elementary, a prototype system that aims to enable quick development and scalable deployment of KBC systems that combine diverse data resources and best-of-breed algorithms via machine learning and statistical inference (Niu, 2012). This article provides an overview of the motivation and advantages of the Elementary architecture, while only briefly touching on individual technical challenges that are addressed in our other publications. We structure our presentation around two main challenges that we face in designing, implementing, and deploying Elementary: (1) how to integrate conflicting information from multiple sources for KBC in a principled way, and (2) how to scale Elementary for Web-scale KBC.
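As a hedged illustration of challenge (1), the toy model below combines conflicting extractor votes with weighted log-odds under a logistic model; this is far simpler than Elementary's actual statistical inference, and the weights are invented for the example:

```python
import math

def combine(votes):
    """Combine weighted extractor votes into a probability.

    votes: list of (says_true: bool, weight: float). Each signal adds
    +weight if it asserts the fact and -weight if it denies it; the
    summed score is squashed through the logistic function.
    """
    score = sum(w if v else -w for v, w in votes)
    return 1.0 / (1.0 + math.exp(-score))

# A regex rule (weight 1.5) and a dictionary hit (weight 0.8) say yes;
# a low-precision bootstrapped pattern (weight 0.5) says no.
p = combine([(True, 1.5), (True, 0.8), (False, 0.5)])
print(round(p, 3))
# -> 0.858
```

The point of such a model is that no single source needs to be trusted absolutely: high-precision signals get large weights, noisy ones get small weights, and conflicts are resolved by the aggregate score rather than by hard rules.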
