Query Expansion based on Central Tendency and PRF for Monolingual Retrieval

Rekha Vaidyanathan, Sujoy Das, Namita Srivastava
Copyright: © 2016 |Pages: 21
DOI: 10.4018/IJIRR.2016100103

Abstract

Query Expansion is the process of selecting relevant words that are closest in meaning and context to the keyword(s) of a query. In this paper, a statistical method is proposed that automatically selects contextually related words for expansion after identifying a pattern in their scores. Words appearing in the top 10 relevant documents are given a score with respect to the partitions in which they appear. The proposed statistical method identifies a pattern of central tendency in the high scores and selects the right group of words for query expansion. The objective of the method is to keep the expanded query light, with a minimum number of words, while still giving statistically significant MAP values compared to the original query. Experimental results show a 17-21% improvement in MAP over the original unexpanded query as the baseline, and the method achieves performance similar to that of the state-of-the-art query expansion models Bo1 and KL. FIRE 2011 Adhoc English and Hindi data, with 50 topics each, were used for the experiments, with Terrier as the retrieval engine.

Introduction

Query Expansion is the process of selecting relevant words that are closest in meaning and context to the keyword in a query. It overcomes the problem of word mismatch, where different words are used in the query and in the documents to describe the same concept (Xu & Croft, 1996). Query Expansion is a successful technique in most cases but depends largely on the variation in retrieval performance across queries (Amati, Carpineto & Romano, 2004). One of the most popular techniques is Pseudo Relevance Feedback, where the user submits a short query and an expanded query is reformulated from the initial set of retrieved results. The expanded query contains terms from the initially retrieved documents that closely match the query words: synonyms, plurals, modifiers, etc. (Jones, Rey, Madani, & Greiner, 2006). Originally, query expansion was performed on the PRF information extracted from the top N documents selected in an initial search on the same collection that contains the target documents (Evans & Lefferts, 1994). Pseudo Relevance Feedback is fully automatic compared to explicit feedback (Farah, 2009), as it does not require any user input, which makes it more attractive (Wu, Zhang, Zhou & Huang, 2010; Buckley, Salton, Allan & Singhal, 1995; Yu, Cai, Wen, & Ma, 2003). Because the process is fully automated, however, it can either improve or hurt the query. Experiments show that with large query sets (of 50 queries), significant improvements in overall performance are obtained (Mitra, Singhal & Buckley, 1998).
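As a rough illustration of this feedback loop, the sketch below outlines a generic pseudo relevance feedback cycle of the kind described above. The search and tokenize callables, as well as the parameter values, are placeholders assumed for the example; this is not the expansion method proposed in this paper.

from collections import Counter

def pseudo_relevance_feedback(query_terms, search, tokenize, top_n=10, n_expansion=5):
    # Generic PRF loop: retrieve, mine the top documents, reformulate the query.
    # `search` and `tokenize` are assumed to be supplied by the retrieval system.

    # Initial retrieval with the original (short) query.
    initial_docs = search(query_terms)[:top_n]

    # Count candidate terms in the pseudo-relevant documents,
    # skipping words already present in the query.
    counts = Counter()
    for doc_text in initial_docs:
        counts.update(t for t in tokenize(doc_text) if t not in query_terms)

    # Keep the highest-scoring candidates and append them to the query.
    expansion = [term for term, _ in counts.most_common(n_expansion)]
    return query_terms + expansion

In practice the reformulated query is simply resubmitted to the same engine, which is why the quality of the initially retrieved documents determines whether the expansion helps or hurts.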

Automatically identifying expansion terms from documents is a challenging task. A popular technique in IR is to assign weights to words and represent them in a vector space. It is used for Relevance Feedback, and a classical model was proposed by Rocchio to measure text similarity and to identify relevant and non-relevant documents (Rocchio & Salton, 1971). Other methods for relevance feedback cite contextual and word similarity modeled as co-occurrence (Kilgarriff, Rychly, Smrz & Tugwell, 2004; Matsuo & Ishizuka, 2004), frequency estimates (Terra & Clarke, 2003), etc. Among term weighting methods, term frequency and inverse document frequency is regarded as an empirical method with several possible variations (Aizawa, 2003). Related studies include the frequency of words (Luhn, 1957), inverse document frequency as term specificity (Spärck Jones, 1972), and tf.idf and its variations (Salton & Buckley, 1988). More recent studies include the tf-rf scheme for text categorization (Lan, Tan, Low & Sung, 2005), a local relevance scheme (Wu, Luk, Wong, & Kwok, 2008), and tf.idf weighting based on the length of the query (Paik, 2013), to cite a few. In this paper, a variation of tf.idf is applied to derive a score for the words. After applying Pseudo Relevance Feedback (PRF), the score is assigned to words in partitions of the initially retrieved documents rather than in the whole document, as illustrated below.
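The exact partition-based score is defined later in the paper; purely to illustrate the idea of scoring words by the partitions they appear in, the sketch below counts, for each candidate word, the number of partitions of the top-ranked documents that contain it. Splitting each document into four equal parts and the helper names are assumptions made for this example, not the authors' definition.

def partition_scores(top_docs, tokenize, num_partitions=4):
    # Score each word by the number of document partitions in which it appears.
    # `top_docs` is a list of document texts from the initial retrieval;
    # dividing each document into `num_partitions` equal parts is an
    # assumption made for this illustration only.
    scores = {}
    for text in top_docs:
        tokens = tokenize(text)
        size = max(1, len(tokens) // num_partitions)
        for i in range(num_partitions):
            part = set(tokens[i * size:(i + 1) * size])
            for word in part:
                scores[word] = scores.get(word, 0) + 1
    return scores

A word that occurs in many partitions across the top documents is spread throughout the pseudo-relevant text, which is the intuition behind preferring partition-level evidence over a single whole-document count.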

The measure tf.idf is extensively used in text retrieval due to its robustness (Robertson, 2004). In the basic tf.idf formula, the measure of term specificity called the inverse document frequency (IDF, proposed by Karen Spärck Jones in 1972) is based on the number of documents containing the word. Thus, idf is calculated for a word across the documents in a collection.

\mathrm{idf}(t) = \log \frac{D}{d_t} \quad (1)

where t = term in the query; D = total number of documents in the collection; d_t = number of documents in which term t occurs.
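As a quick worked example of Equation (1), the idf of a term can be computed directly from document frequencies; the toy collection below is illustrative only.

import math

def idf(term, documents):
    # Inverse document frequency: log(D / d_t), as in Equation (1).
    D = len(documents)                                   # total documents in the collection
    d_t = sum(1 for doc in documents if term in doc)     # documents containing the term
    return math.log(D / d_t) if d_t else 0.0

# Toy collection: a term occurring in 1 of 4 documents gets idf = log(4/1) ≈ 1.386.
docs = [{"query", "expansion"}, {"retrieval"}, {"feedback"}, {"query"}]
print(idf("expansion", docs))

Rare terms thus receive high idf and common terms receive low idf, which is why idf acts as a measure of term specificity across the collection.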
