Introduction
Text classification in the framework of machine learning is an active area of research, encompassing a variety of learning algorithms (Silver et al., 2013), classification systems (Battula & Prasad, 2013) and data representations (Rossi et al., 2012). Three non-standard issues in machine learning are the focus of the research in this chapter: short-text classification problems, limited labeled data, and non-topical classification. This chapter examines the classification of search queries, one example of text classification that is particularly complex and challenging. Search queries are typically short and reveal very few features per query, and are therefore a weak source for traditional machine learning (Gabrilovich et al., 2009).
We examine the issues of non-hierarchical classification (Dib & Carbone, 2012) and investigate a method that combines limited manual labeling, computational linguistics and information retrieval to classify a large collection of search queries. We discuss the classification proficiency of the proposed method on a large search engine query log, and the effect of variations of this method on the quality and efficiency of short-text classification.
We executed two sets of classification tasks (see Figure 1). The first set (Temporal) consisted of three classification tasks executed in December 2006, November 2009, and July 2013 (in further discussions we will refer to these tasks as Task-6, Task-9, and Task-13). These three tasks were designed to examine the impact of the growing internet collection on the quality and consistency of classification results. In all three tasks we used the original search queries to search the Web using the Google search engine. The search results were used to create the background knowledge for further query log classification.
Figure 1. Classification sets/tasks
The second set (Contextual) consisted of two classification tasks, both executed in July 2013. The first task is the same as Task-13 above, while the second was modified to incorporate Automatic Query Expansion (Task-AQE). It is established knowledge that the results of any retrieval depend largely on the quality of the search query (Haiduc et al., 2013; Carpineto & Romano, 2012). The search queries in the original log are short, so to improve the relevance of the search results, and subsequently the relevance of the background knowledge, we used snippets. Snippets are the Google-generated page titles and descriptions, which we used to discover extra search terms to expand the original log search queries. The last two tasks were executed concurrently to analyze the impact of automated query expansion on the quality of the background knowledge.
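The snippet-based expansion step can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the tokenization, the stopword list, and the frequency-based ranking of candidate terms are all assumptions made for the example.

```python
from collections import Counter
import re

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
             "for", "is", "on", "with", "from"}

def expand_query(query, snippets, k=3):
    """Append the k most frequent snippet terms not already in the query.

    `snippets` stands in for the Google-generated titles and
    descriptions returned for the original query.
    """
    query_terms = set(query.lower().split())
    counts = Counter()
    for snippet in snippets:
        for term in re.findall(r"[a-z]+", snippet.lower()):
            if term not in STOPWORDS and term not in query_terms:
                counts[term] += 1
    expansion = [term for term, _ in counts.most_common(k)]
    return query + " " + " ".join(expansion) if expansion else query

# Hypothetical snippets for the ambiguous query "jaguar".
snippets = [
    "Jaguar cars: luxury sedans and sports cars from Jaguar Land Rover.",
    "Official Jaguar site - explore luxury sports cars and sedans.",
]
print(expand_query("jaguar", snippets))
```

The expanded query ("jaguar cars luxury sedans" for the snippets above) is then re-submitted to the search engine, and those results form the background knowledge for classification.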
We start with a search engine query log, which is viewed as a set of textual data on which we perform classification (Ortiz-Cordova & Jansen, 2012; Zimmer & Spink, 2008). Viewed in this way, each query in the log can be seen as a document that is to be classified according to some pre-defined set of labels, or classes. Treating the initial log of search queries as a document corpus D = {d1, d2, …, di, …, dn}, we create a set of classes that indicate a personal demographic characteristic of the searcher, C = {c1, c2, …, cj, …, cm}. Using Web searches, our approach retrieves a set of background knowledge to learn additional features that are indicative of the classes in C, which allows for the categorization of the queries. This approach consists of the following five steps: