Estimation of a Priori Decision Threshold for Collocations Extraction: An Empirical Study

Fethi Fkih, Mohamed Nazih Omri
DOI: 10.4018/ijitwe.2013070103

Abstract

Choosing the optimal threshold for collocation extraction remains a manual task performed by experts. To date, no thorough study has explored solutions for automatically learning this threshold in the field of statistical terminology. In this paper, the authors shed light on this problem by first exploring the performance evaluation techniques used in several scientific areas (such as biomedicine and biometrics) and then applying them to the statistical terminology field. The experimental study gives promising results. First, it shows the effectiveness of standard techniques (such as ROC and Precision-Recall curves) for evaluating the performance of binary classification systems. Second, it provides a practical solution for the automatic estimation of optimal thresholds in collocation extraction systems.

1. Introduction

Textual data make up the majority of the information residing on the Web and in document databases. Indeed, textual content is the primary source of several types of information (e.g., named entities, terms, collocations, etc.) that are useful in many applications (indexing, machine translation, document classification, construction of linguistic resources, lexicography, parsing, etc.).

The automatic extraction of relevant knowledge from text documents is a difficult task. In fact, it requires combining different research disciplines such as Natural Language Processing, Knowledge Engineering, Information Retrieval, and Artificial Intelligence.

In this paper we focus on the extraction of a particular kind of knowledge, namely word collocations, which are characterized by specific linguistic and statistical properties. According to Manning and Schütze (1999), collocations can be characterized by three linguistic properties:

  • Limited Compositionality: The meaning of the collocation is not a composition of the meanings of its parts. For example, the meaning of the collocation “strong tea” is different from the composition of the meaning of “strong” and the meaning of “tea”.

  • Limited Substitutability: We cannot substitute part of a collocation with a synonym. For example, “strong” in “strong tea” cannot be substituted by “muscular”.

  • Limited Modifiability: Many collocations cannot be supplemented by additional words. For example, the collocation “to kick the bucket” cannot be supplemented as “to kick the {red/plastic/water} bucket” (Wermter & Hahn, 2004).

Indeed, the extraction of collocations involves two main tasks: recognizing interesting collocations in the text, and classifying them according to classes predefined by the expert. Firth (1957) asserts that “you shall know a word by the company it keeps”. In this perspective, the techniques used for collocation extraction are often based on the joint frequency of a pair of words within a sliding window of fixed size (Church et al., 1989). In practice, the joint frequency is used to compute a score that measures the strength of association between two words in a given text. If this score exceeds a threshold fixed a priori, the pair is judged to form a relevant collocation, as sketched below.
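To make this concrete, the following minimal sketch (not our exact procedure) counts joint frequencies of word pairs within a sliding window and scores each pair with pointwise mutual information (PMI), one common association measure; the window size and threshold values are purely illustrative.

```python
import math
from collections import Counter

def pmi_candidates(tokens, window_size=5, threshold=3.0):
    """Return word pairs whose PMI score exceeds the a priori threshold."""
    unigrams = Counter(tokens)
    pairs = Counter()
    for i in range(len(tokens)):
        # joint frequency: count every pair seen inside the sliding window
        for j in range(i + 1, min(i + window_size, len(tokens))):
            pairs[tuple(sorted((tokens[i], tokens[j])))] += 1
    n = len(tokens)
    selected = []
    for (w1, w2), joint in pairs.items():
        # PMI = log2( P(w1, w2) / (P(w1) * P(w2)) )
        score = math.log2((joint / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        if score >= threshold:  # keep the pair only if it beats the a priori threshold
            selected.append(((w1, w2), score))
    return sorted(selected, key=lambda item: -item[1])
```

Any other association measure (log-likelihood ratio, Dice coefficient, etc.) could replace PMI here; the point is only that the final decision reduces to comparing a score against a fixed cut-off.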

The collocation extraction problem can be seen as a binary classification problem: the system classifies collocations into two classes, relevant and irrelevant. This classification depends mainly on two parameters: the statistical value used to weight collocations, and the threshold value used for discrimination. Estimating the discrimination threshold is a well-known problem in several scientific fields (such as signal processing, image processing, and information retrieval). In fact, the literature offers a wide range of machine learning techniques for predicting the threshold, among them Bayesian networks (Gustafson et al., 2009), the Perona-Malik model (Shao & Zou, 2009), and genetic algorithms (Lia et al., 2012).
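Viewed this way, the whole extraction step reduces to comparing each candidate's score against a single cut-off and tallying the outcomes against a gold standard. The following minimal sketch (with invented scores and labels) makes this binary-classification reading explicit.

```python
def confusion_matrix(scores, gold_labels, threshold):
    """Count TP/FP/TN/FN for one decision threshold."""
    tp = fp = tn = fn = 0
    for score, relevant in zip(scores, gold_labels):
        predicted = score >= threshold
        if predicted and relevant:
            tp += 1
        elif predicted and not relevant:
            fp += 1
        elif not predicted and relevant:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

# Illustrative association scores and gold relevance judgments.
scores = [5.2, 4.1, 3.3, 2.8, 1.9, 0.7]
gold   = [True, True, False, True, False, False]
print(confusion_matrix(scores, gold, threshold=3.0))  # -> (2, 1, 2, 1)
```

Each candidate threshold yields a different confusion matrix, and therefore different precision, recall, and accuracy values; this is exactly what the performance curves discussed next summarize.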

As in other areas of knowledge, the choice of the ideal threshold is a problem in the terminology extraction field. However, the literature offers no exact rules to justify this choice. So, in this paper we try to shed light on the threshold determination problem by exploring, first, the techniques used in several scientific areas (such as biomedicine and biometrics) and then applying them to the statistical terminology field. Our approach is mainly based on statistical techniques for measuring the performance of binary classification systems, namely ROC curves, Precision-Recall curves, and Accuracy curves.
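The following sketch illustrates the underlying idea using scikit-learn (an assumed tooling choice, not necessarily the one used in our experiments): sweep the decision threshold, build the ROC and Precision-Recall curves, and retain the threshold that maximizes Youden's J statistic or the F1 score. The scores and labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

# Illustrative association scores and gold relevance labels.
scores = np.array([5.2, 4.8, 4.1, 3.6, 3.3, 2.8, 2.1, 1.9, 1.2, 0.7])
gold   = np.array([1,   1,   1,   0,   1,   0,   1,   0,   0,   0])

# ROC-based choice: maximize Youden's J = TPR - FPR over all thresholds.
fpr, tpr, roc_thr = roc_curve(gold, scores)
best_roc = roc_thr[np.argmax(tpr - fpr)]

# Precision-Recall-based choice: maximize F1 over all thresholds.
prec, rec, pr_thr = precision_recall_curve(gold, scores)
f1 = 2 * prec[:-1] * rec[:-1] / np.maximum(prec[:-1] + rec[:-1], 1e-12)
best_pr = pr_thr[np.argmax(f1)]

print(f"ROC-optimal threshold: {best_roc:.2f}")
print(f"PR-optimal threshold:  {best_pr:.2f}")
```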

The remainder of this paper is structured as follows. Section 2 presents the theoretical basis of the statistical approach to collocation extraction. Then, we identify the main measures used to evaluate the performance of binary classifiers. Finally, we conclude with an exposition of the obtained results.
