Temporal and Contextual Evaluation of Background Knowledge Discovery for Short Text Classification

Isak Taksa (Baruch College, City University of New York, New York, NY, USA), Sarah Zelikovitz (The College of Staten Island, City University of New York, Staten Island, NY, USA) and Amanda Spink (Loughborough University, Loughborough, UK)
DOI: 10.4018/IJOCI.2012070103

Abstract

Background Knowledge (BK) plays an essential role in machine learning for short-text and non-topical classification. In this paper the authors present and evaluate two Information Retrieval techniques used to assemble four sets of BK over the past seven years. These sets were applied to classify a commercial corpus of search queries by the apparent age of the user. Temporal and contextual evaluations were used to examine the results of various classification scenarios, providing insight into the choice, significance, and range of tuning parameters. The evaluations also demonstrated the impact of the dynamic Web collection on classification results and the advantages of Automatic Query Expansion (AQE) over basic search. The authors discuss other results of this research and its implications for the advancement of short-text classification.

Introduction

Text classification in the framework of machine learning is an active area of research, encompassing a variety of learning algorithms (Silver et al., 2013), classification systems (Battula & Prasad, 2013) and data representations (Rossi et al., 2012). Three non-standard issues in machine learning are the focus of the research in this paper: short-text classification, limited labeled data, and non-topical classification. This paper examines the classification of search queries, one example of text classification that is particularly complex and challenging. Search queries are typically short and reveal very few features per query, making them a weak source for traditional machine learning (Gabrilovich et al., 2009).

We examine the issues of non-hierarchical classification (Dib & Carbone, 2012) and investigate a method that combines limited manual labeling, computational linguistics, and information retrieval to classify a large collection of search queries. We discuss the classification proficiency of the proposed method on a large search engine query log, and the effect of variations of this method on the quality and efficiency of short-text classification.

We executed two sets of classification tasks (see Figure 1). The first set (Temporal) consisted of three classification tasks executed in December 2006, November 2009, and July 2013 (hereafter Task-6, Task-9, and Task-13). These three tasks were designed to examine the impact of the growing Web collection on the quality and consistency of classification results. In all three tasks we used the original search queries to search the Web with the Google search engine. The search results were used to create the background knowledge for subsequent query log classification.

Figure 1.

Classification sets/tasks

The second set (Contextual) consisted of two classification tasks, both executed in July 2013. The first task is the same as Task-13 above, while the second was modified to incorporate Automatic Query Expansion (Task-AQE). It is well established that the results of any retrieval depend largely on the quality of the search query (Haiduc et al., 2013; Carpineto & Romano, 2012). Because the search queries in the original log are short, we used snippets to improve the relevance of the search results and, subsequently, the relevance of the background knowledge. Snippets are the Google-generated page titles and descriptions, from which we discovered extra search terms to expand the original log queries. The last two tasks were executed concurrently to analyze the impact of automatic query expansion on the quality of the background knowledge.
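A minimal sketch of snippet-based query expansion in the spirit described above, assuming the snippets for a query have already been retrieved; the function name, stopword list, and parameter k are illustrative choices, not the authors' implementation:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "is", "on"}

def expand_query(query, snippets, k=3):
    """Expand a short query with the k most frequent non-stopword terms
    found in the retrieved snippets (page titles and descriptions)."""
    original = set(query.lower().split())
    counts = Counter(
        term
        for snippet in snippets
        for term in re.findall(r"[a-z]+", snippet.lower())
        if term not in STOPWORDS and term not in original
    )
    extra = [term for term, _ in counts.most_common(k)]
    return query + " " + " ".join(extra)
```

The expanded query would then be resubmitted to the search engine, and the new results used as background knowledge for the classifier.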

We start with a search engine query log, which is viewed as a set of textual data on which we perform classification (Ortiz-Cordova & Jansen, 2012; Zimmer & Spink, 2008). Observed in this way, each query in a log can be seen as a document to be classified according to some pre-defined set of labels, or classes. Viewing the initial log of search queries as a document corpus D = {d1, d2, …, di, …, dn}, we create a set of classes that indicate a personal demographic characteristic of the searcher, C = {c1, c2, …, cj, …, cm}. Using Web searches, our approach retrieves a set of background knowledge to learn additional features that are indicative of the classes in C, which allows for the categorization of the queries. This approach consists of the following five steps:
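The framing above (queries as documents, demographic labels as classes, and background knowledge as a source of extra features) can be illustrated with a minimal sketch; the function names and toy data are hypothetical and stand in for the authors' actual retrieval and learning machinery:

```python
from collections import defaultdict

def train_class_profiles(labeled_queries, background):
    """Build a term profile per class from a few manually labeled queries,
    augmented with the background text retrieved for each query."""
    profiles = defaultdict(set)
    for query, label in labeled_queries:
        text = query + " " + background.get(query, "")
        profiles[label].update(text.lower().split())
    return profiles

def classify(query, profiles, background=None):
    """Assign the class whose profile shares the most terms with the
    (optionally background-augmented) query."""
    text = query
    if background:
        text += " " + background.get(query, "")
    terms = set(text.lower().split())
    return max(profiles, key=lambda c: len(terms & profiles[c]))
```

Even this toy overlap classifier shows why background knowledge matters: a two-word query alone rarely shares any terms with a class profile, while its retrieved background text supplies the discriminative features.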
