Combining Diverse Knowledge Based Features for Semantic Relatedness Measures

Anna Lisa Gentile, Ziqi Zhang, Fabio Ciravegna
DOI: 10.4018/978-1-60960-881-1.ch005

Abstract

This chapter proposes a novel Semantic Relatedness (SR) measure that exploits diverse features extracted from a knowledge resource. Computing SR is a crucial technique for many complex Natural Language Processing (NLP) and Semantic Web tasks. Typically, semantic relatedness measures make use of only a limited number of features, without considering diverse feature sets or understanding the different contributions of features to the accuracy of a method. This chapter proposes a method based on a random graph walk model that naturally combines diverse features extracted from a knowledge resource in a balanced way to compute semantic relatedness. A set of experiments is carefully designed to investigate the effects of choosing different features and altering their weights on the accuracy of the system. Next, using the derived feature sets and feature weights, the authors evaluate the proposed method against state-of-the-art semantic relatedness measures and show that it obtains higher accuracy on many benchmarking datasets. Additionally, the authors demonstrate the usefulness of the proposed method in a practical NLP task, i.e., Named Entity Disambiguation.

Introduction

Semantic relatedness quantifies how much two terms or concepts are related by encompassing all kinds of relations between them, such as hypernymy, hyponymy, antonymy and functional relations. State-of-the-art semantic relatedness measures can be roughly divided into two mainstreams. The first makes use of the distribution of terms or co-occurrence statistics observed in a large corpus (Bollegala, et al., 2007; R. Cilibrasi & Vitányi, 2007; Matsuo, et al., 2006; Sahami & Heilman, 2006). These methods are usually referred to as "statistic-based". The second mainstream is constituted by "knowledge-based" methods, which employ structural and lexical features extracted from knowledge resources such as WordNet, Wiktionary and Wikipedia. Among these, WordNet has been extensively used in this field (Banerjee & Pedersen, 2003; Hughes & Ramage, 2007; Leacock & Chodorow, 1998; Resnik, 1995a), but it has also been criticized for its lack of coverage of named entities and specialized concepts, which are crucial to domain-specific problems (Bollegala, et al., 2007; Strube & Ponzetto, 2006). With the increasing popularity of collaborative knowledge resources (Zesch, et al., 2008a) in NLP, Wiktionary has been proposed as an alternative to WordNet, often achieving better results (Müller & Gurevych, 2009; Weale, et al., 2009; Zesch, et al., 2008b). Unfortunately, being a word-based knowledge resource similar to WordNet, Wiktionary does not overcome this limitation: it has little or no coverage of specialized concepts or named entities, which may hinder its application to domain-specific NLP tasks. In contrast, a major alternative collaborative knowledge source, Wikipedia, contains rich structural and lexical knowledge about entities and concepts (Kazama & Torisawa, 2007). Such knowledge has proved to provide useful features for computing semantic relatedness, and for this reason it has attracted increasing attention from researchers of SR (Gabrilovich & Markovitch, 2007; Hassan & Mihalcea, 2009; Strube & Ponzetto, 2006; Zesch, et al., 2008b).

However, state-of-the-art methods typically employ only one or two types of structural elements and information content extracted from the knowledge resources. Although this keeps the methods simple, experience from other information extraction tasks such as Named Entity Recognition (Grishman & Sundheim, 1996) and relation extraction (Giuliano, et al., 2006) suggests that combining multiple and mutually exclusive features can lead to improved performance. With this motivation, we believe that combining diverse features extracted from the knowledge resource in a balanced way can further improve the accuracy of semantic relatedness systems. To validate this hypothesis, we propose a novel SR method that naturally integrates diverse features weighted according to their importance for the task and arrives at a single measure of relatedness between terms or concepts. In our studies we choose Wikipedia as the knowledge resource because of its broader coverage of concepts and richer lexical and structural knowledge than WordNet and Wiktionary, and also because higher accuracy has been reported when similar methods are tested with Wikipedia rather than WordNet. We extract six different types of lexical and structural knowledge and feed them as features to the random graph walk algorithm. The same method can be adapted to resources other than Wikipedia by defining the features to extract according to the resource at hand. The random graph walk model is chosen for its robustness in dealing with multiple types of features (Iria, et al., 2007) and for representing features in a natural and semantic manner, which facilitates the study of feature effects. Other models, such as cosine similarity, could nevertheless be used; a rough illustration of the walk-based combination follows below.
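To make the idea of combining weighted features in a walk concrete, the following is a minimal sketch of a random walk with restart over a graph whose transition probabilities blend several feature types. It is illustrative only: the feature names, weights, edge strengths, and helper functions are hypothetical placeholders and do not reproduce the six Wikipedia feature types, the weighting scheme, or the exact walk formulation studied in the chapter.

```python
import numpy as np

# Hypothetical feature types and weights (placeholders, not the chapter's six
# Wikipedia feature types or its learned/derived weights).
FEATURE_WEIGHTS = {"in_links": 0.2, "out_links": 0.2, "categories": 0.3, "gloss_terms": 0.3}

def build_transition_matrix(n_nodes, feature_edges, feature_weights):
    """Combine per-feature edge sets into one row-stochastic transition matrix.

    feature_edges maps a feature name to {(i, j): strength} over node indices;
    each feature's edges are scaled by its weight before row normalisation,
    so all feature types contribute to a single walk.
    """
    M = np.zeros((n_nodes, n_nodes))
    for feat, edges in feature_edges.items():
        w = feature_weights.get(feat, 0.0)
        for (i, j), strength in edges.items():
            M[i, j] += w * strength
    row_sums = M.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero; isolated nodes keep all-zero rows
    return M / row_sums

def walk_relatedness(P, source, target, restart=0.15, iterations=50):
    """Score target by its probability mass under a walk restarting at source."""
    n = P.shape[0]
    r = np.zeros(n)
    r[source] = 1.0
    p = r.copy()
    for _ in range(iterations):
        # One step of the walk: restart at the source node or follow an edge.
        p = restart * r + (1.0 - restart) * (P.T @ p)
    return float(p[target])

# Toy usage: three nodes connected by two hypothetical feature types.
edges = {
    "in_links": {(0, 1): 1.0, (1, 0): 1.0},
    "categories": {(1, 2): 1.0, (2, 1): 1.0},
}
P = build_transition_matrix(3, edges, FEATURE_WEIGHTS)
score = 0.5 * (walk_relatedness(P, 0, 2) + walk_relatedness(P, 2, 0))
print(score)
```

Under these assumptions, the relatedness between two concepts is read off as the probability mass a walker restarting at one concept assigns to the other, averaged over both directions to obtain a single symmetric score; the per-feature weights control how much each knowledge type influences the walk, which is the kind of balanced combination the chapter investigates.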
