Collective Entity Disambiguation Based on Hierarchical Semantic Similarity
Bingjing Jia, Hu Yang, Bin Wu, Ying Xing
Copyright: © 2020 |Pages: 17
DOI: 10.4018/IJDWM.2020040101

Abstract

Entity disambiguation involves mapping mentions in texts to the corresponding entities in a given knowledge base. Most previous approaches were based on handcrafted features and failed to capture semantic information over multiple granularities. To disambiguate entities accurately, various aspects of information about mentions and entities should be used. This article proposes a hierarchical semantic similarity model that finds important clues related to mentions and entities based on multiple sources of information, such as the contexts of mentions, entity descriptions, and entity categories. This model can effectively measure the semantic matching between mentions and target entities. Global features, including prior popularity and global coherence, are also added to improve performance. To verify the effectiveness of the hierarchical semantic similarity model combined with global features, named HSSMGF, experiments were carried out on five publicly available benchmark datasets. The results demonstrate that the proposed method is especially effective when documents contain more mentions.

1. Introduction

With the growth of the web, vast amounts of unstructured text have emerged to represent web content. These texts contain many mentions, i.e., names of people, places, organizations, and so on. Unfortunately, a single mention may have several meanings when taken out of context (Shen et al., 2014). For example, consider the sentence "Donald Trump has arrived in Washington ahead of his inauguration as the 45th President of the United States." The mention "Washington" may refer to the capital of the United States, the first President of the United States, or a football club in Washington. Finding the real-world entities to which mentions refer is helpful for understanding such sentences. This process is called entity disambiguation (ED). Many researchers regard entries in a knowledge base (KB) as surrogates for real-world entities, so the main purpose of ED is to link mentions in text to the corresponding entities in a KB such as Wikipedia. ED is an essential step in combining unstructured and structured data, and it benefits many applications, including knowledge discovery, question answering, and knowledge base population. In this paper, we target entity disambiguation with a new neural network method.
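As a concrete illustration of the first step in ED, the sketch below maps the surface form "Washington" from the example above to its possible KB entries. The dictionary here is invented purely for illustration; real systems typically mine such mention-to-entity mappings from Wikipedia anchor texts and redirect pages.

```python
# Toy candidate generation for entity disambiguation.
# The mapping below is a hypothetical stand-in for a mention dictionary
# built from Wikipedia anchors; it is not data from the paper.
mention_to_entities = {
    "Washington": [
        "Washington, D.C.",        # the capital of the United States
        "George Washington",       # the first President of the United States
        "Washington football club" # a football club in Washington
    ],
}

def candidate_entities(mention):
    """Return the KB entries a surface mention may refer to (empty if unseen)."""
    return mention_to_entities.get(mention, [])

print(candidate_entities("Washington"))
```

Disambiguation then reduces to ranking these candidates, which is what the methods discussed next address.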

Traditional ED methods usually take into account the context of a mention in the text; e.g., "inauguration" helps to establish that "Washington" refers to the capital of the United States, "Washington, D.C.". Researchers have explored many ways to model this context information, such as the vector space model (Mendes et al., 2011) and TF-IDF vectors with topic features (Ratinov et al., 2011). The similarity between a mention and an entity can be measured with these features, and the entity with the highest similarity score is taken as the most likely result. However, handcrafted features are insufficient to capture the semantic information embedded in the context because they rely on domain knowledge. Neural network approaches attempt to learn context representations without manual feature design and have achieved promising results. For example, He et al. (2013) take the entire document as input and incorporate its internal structure into the context representation with stacked denoising auto-encoders. However, such models cannot identify the important information among context words and consider only a few features of the mention or entity. In addition, existing approaches do not jointly consider multiple mentions in the same document (Sun et al., 2015), even though candidate entities within the same document are highly related.

In this paper, we design a new collective entity disambiguation method based on a hierarchical semantic similarity model (HSSM) and global features, named HSSMGF, to overcome the above-mentioned shortcomings. First, HSSM uses an attention mechanism to select important information from multiple information sources. Second, the selected information is merged, and the attention mechanism is reapplied to generate hierarchical representations of mentions and entities. Third, global features, including prior popularity and global coherence, are utilized to improve the ED results.
To reduce computational cost, when disambiguating a mention, all other mentions in the same document need not be considered. Instead, the coherence of entities can be computed simply from unambiguous mentions, which introduce less noise. HSSMGF not only makes use of all the available information about mentions and entities but also remains robust when supporting information is missing. Our main contributions are summarized as follows.
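The attention step at the core of the local matching can be illustrated as follows. This is a minimal sketch with random stand-in embeddings: each side's words are pooled with attention guided by the other side's vector, and the pooled representations are compared. The function names, dimensions, and single attention layer are assumptions for illustration; they do not reproduce the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding dimension

def attention_pool(source_vecs, query_vec):
    """Weight source word vectors by softmax relevance to a query, then sum.

    This mimics how an attention mechanism picks out the context (or
    description) words that matter most for a given mention/entity pair."""
    scores = source_vecs @ query_vec            # relevance of each word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over words
    return weights @ source_vecs                # attention-weighted sum

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for pretrained word embeddings.
context_words = rng.normal(size=(5, dim))   # words around the mention
entity_words = rng.normal(size=(7, dim))    # words of the entity description
mention_vec = rng.normal(size=dim)
entity_vec = rng.normal(size=dim)

# Each side is summarized under the other side's guidance, then compared;
# merging several such views over different sources would give the
# hierarchical similarity score.
m_repr = attention_pool(context_words, entity_vec)
e_repr = attention_pool(entity_words, mention_vec)
local_score = cosine(m_repr, e_repr)
print(local_score)
```

In the full model, this local score is computed over several information sources (contexts, descriptions, categories) and merged before the global features come into play.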

  • We present a hierarchical semantic similarity model that generates hierarchical representations of mentions and entities. The model is designed to fully exploit different kinds of information about mentions and entities to capture semantic similarity. An attention mechanism is used to select the information most relevant to the mention or entity, which improves semantic matching.

  • The semantic similarities between mentions and entities are combined with global features. The proposed method is a simple and effective collective ED algorithm which balances the influence of all features to achieve the best result.

  • To evaluate the effectiveness of HSSMGF, experimental studies are conducted on five publicly available datasets. The results demonstrate that HSSMGF outperforms state-of-the-art methods in most cases, and several visualization cases demonstrate the interpretability of our method.
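Putting these pieces together, the collective scoring can be sketched as a weighted combination of the local semantic similarity with the two global features, prior popularity and coherence with the entities of unambiguous mentions. The weights and feature values below are invented for illustration; they are not the paper's learned parameters.

```python
# Hedged sketch of the final scoring in a method like HSSMGF: combine the
# local similarity with global features and pick the best-scoring candidate.

def final_score(local_sim, prior, coherence, w=(0.5, 0.25, 0.25)):
    """Weighted combination of local and global features (weights assumed)."""
    return w[0] * local_sim + w[1] * prior + w[2] * coherence

# Candidates for the mention "Washington", with made-up feature values:
# local_sim  - hierarchical semantic similarity to the mention context
# prior      - prior popularity of the entity
# coherence  - agreement with entities of unambiguous mentions nearby
candidates = {
    "Washington, D.C.":  {"local_sim": 0.82, "prior": 0.60, "coherence": 0.70},
    "George Washington": {"local_sim": 0.55, "prior": 0.30, "coherence": 0.20},
}

best = max(candidates, key=lambda e: final_score(**candidates[e]))
print(best)  # -> Washington, D.C.
```

Balancing the three terms is exactly the trade-off the second contribution above refers to: the weights control how much the document-level evidence can override the purely local match.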
