Introduction
Knowledge-base construction (KBC) is the process of populating a knowledge base (KB) with facts (or assertions) extracted from text. It has recently received tremendous interest from academia (Weikum & Theobald, 2010), e.g., CMU's NELL (Carlson, Betteridge, Kisiel, Settles, Hruschka, & Mitchell, 2010; Lao, Mitchell, & Cohen, 2011) and MPI's YAGO (Kasneci, Ramanath, Suchanek, & Weikum, 2008; Nakashole, Theobald, & Weikum, 2011), and from industry (Fang, Sarma, Yu, & Bohannon, 2011), e.g., IBM's DeepQA (Ferrucci et al., 2010) and Microsoft's EntityCube (Zhu, Nie, Liu, Zhang, & Wen, 2009). To construct high-quality knowledge bases from text, researchers have considered a wide range of data resources and techniques, e.g., pattern matching with dictionaries listing entity names (Riloff, 1993), bootstrapping from existing knowledge bases like Freebase and YAGO (Suchanek, Kasneci, & Weikum, 2007), disambiguation using web links and search results (Hoffart, Yosef, Bordino, Fürstenau, Pin, Spaniol, ... Weikum, 2011; Dredze, McNamee, Rao, Gerber, & Finin, 2010), rule-based extraction with regular expressions curated by domain experts (Derose, Shen, Fei, Lee, Burdick, Doan, & Ramakrishnan, 2007; Chiticariu, Krishnamurthy, Li, Raghavan, Reiss, & Vaithyanathan, 2010), and training statistical models with annotated text (Lafferty, McCallum, & Pereira, 2001). All these resources are valuable because they are complementary in terms of cost, quality, and coverage; ideally one would like to be able to use them all.

To take advantage of these different kinds of data resources, however, KBC systems must cope with imperfect or conflicting information from multiple sources (Weikum & Theobald, 2010). (We use the term "information" to refer to both data and algorithms that can be used for a KBC task.) To address this issue, several recent KBC projects (Carlson et al., 2010; Kasneci et al., 2008; Nakashole et al., 2011; Zhu et al., 2009; Lao et al., 2011) use statistical inference to combine different data resources.
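To make the enumeration above concrete, the following minimal Python sketch (not code from Elementary or any of the cited systems) illustrates two of these resources, a dictionary of entity names and an expert-curated regular expression, each contributing evidence for candidate facts, with a naive weighted vote standing in for the much richer statistical inference that actual KBC systems employ. The example text, pattern, entity list, and weights are all hypothetical.

```python
import re

# Toy input text; the second sentence carries a conflicting, lower-confidence claim.
text = "Barack Obama was born in Honolulu. Obama was born in Kenya."

# Resource 1: a dictionary listing known entity names (hypothetical).
person_dict = {"Barack Obama"}

# Resource 2: an expert-curated pattern for the born_in relation (hypothetical).
born_in_pattern = re.compile(
    r"(?P<person>[A-Z][\w ]+?) was born in (?P<place>[A-Z]\w+)"
)

# Collect candidate assertions, each tagged with its source and a weight.
candidates = []  # (person, place, source, weight)
for m in born_in_pattern.finditer(text):
    person, place = m.group("person").strip(), m.group("place")
    candidates.append((person, place, "regex", 0.6))
    # Extra evidence when the extracted subject also appears in the dictionary.
    if person in person_dict:
        candidates.append((person, place, "dictionary", 0.9))

# Naive stand-in for statistical inference: sum the weights behind each assertion.
scores = {}
for person, place, _, weight in candidates:
    scores[(person, place)] = scores.get((person, place), 0.0) + weight

for (person, place), score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"born_in({person}, {place}) score={score:.1f}")
```

In this toy run the dictionary-supported assertion outscores the pattern-only one; real systems replace the hand-set weights and additive scoring with learned models and joint inference over many interdependent assertions.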
Motivated by the above observation, we present Elementary, a prototype system that aims to enable quick development and scalable deployment of KBC systems that combine diverse data resources and best-of-breed algorithms via machine learning and statistical inference (Niu, 2012). This article provides an overview of the motivation and advantages of the Elementary architecture, while only briefly touching on individual technical challenges that are addressed in our other publications. We structure our presentation around two main challenges that we face in designing, implementing, and deploying Elementary: (1) how to integrate conflicting information from multiple sources for KBC in a principled way, and (2) how to scale Elementary to Web-scale KBC.