Query Based Topic Modeling: An Information-Theoretic Framework for Semantic Analysis in Large-Scale Collections


Eduardo H. Ramírez (Tecnológico de Monterrey, México) and Ramón F. Brena (Tecnológico de Monterrey, México)
DOI: 10.4018/978-1-60960-881-1.ch004


Finally, in order to compare the QTM results with models generated by other methods, we have developed probabilistic metrics that formalize the notion of semantic coherence and can be used to validate overlapping and incomplete clusterings on multi-labeled corpora. Our experiments show that the proposed method can produce models of comparable, or even superior, quality to those produced with state-of-the-art probabilistic methods.
Chapter Preview


Now more than ever, many activities of life, work and business depend on the information stored in massive, fast-growing document collections like the World-Wide Web. However, despite the impressive advances in Web retrieval technologies, searching the web or browsing extensive repositories is not a simple task for many users. Very often users pose ineffective queries and need to reformulate them to better express their intent; unfortunately, producing effective query reformulations to achieve an information goal is not always straightforward (Broder, 2002; Jansen, Booth, & Spink, 2009).

On the other hand, when queries get longer or users lack sufficient domain knowledge, it becomes more likely that the query terms will differ from the terms in the documents, making relevant documents less likely to be retrieved. This problem has been characterized by Furnas et al. (1987) as the “vocabulary problem” or the “term mismatch problem” and is a consequence of synonymy in language. Currently, to deal with synonymy, even the most skilled search users need to infer the words that may appear in their relevant documents and try different query variations using equivalent terms and expressions.

A number of solutions to the “term-mismatch” and ambiguity problems have been reported in the literature since the late 1980s. One of the ultimate motivations and long-term goals of many such developments, including the one presented in this work, is to evolve retrieval technologies from lexical matching towards semantic matching, that is, being able to retrieve documents that do not necessarily include the query terms but nevertheless satisfy the information need.

One of the first solutions proposed along this line of thought was Latent Semantic Indexing (LSI) (Deerwester, Dumais, Landauer, Furnas, & Harshman, 1990). Its authors proposed a vectorial representation of words and documents and used linear algebra to create a spatial representation in which documents with similar terms appear close to each other. As LSI was criticized for lacking a theoretical foundation, Hofmann (1999) proposed a probabilistic version, namely Probabilistic Latent Semantic Indexing (PLSI). PLSI and subsequent methods such as Latent Dirichlet Allocation (LDA) (Blei, Ng, & Jordan, 2003) work under the assumption that a document can be modeled as a mixture of hidden topics, and that those topics can be modeled as probability distributions over words. A parameter estimation algorithm (e.g., maximum likelihood estimation) is then applied to the observed data to learn the parameters of the hidden topics. Authors like Griffiths and Steyvers have characterized this family of works as Probabilistic Topic Models (Griffiths, Steyvers, & Tenenbaum, 2007).
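To make LSI's core operation concrete, the following minimal sketch (not from the chapter; the toy term-document matrix and rank are illustrative assumptions) computes a truncated SVD and shows that documents sharing latent structure end up close in the reduced space even when they use different terms:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# (Illustrative counts; real LSI typically weights entries, e.g. with tf-idf.)
A = np.array([
    [2, 1, 0, 0],   # "car"
    [1, 2, 0, 0],   # "auto"
    [0, 0, 2, 1],   # "stock"
    [0, 0, 1, 2],   # "market"
], dtype=float)

# Truncated SVD: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Each document as a vector in the k-dimensional latent space.
doc_vectors = (np.diag(s_k) @ Vt_k).T

def cos(a, b):
    """Cosine similarity between two latent document vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Docs 0 and 1 share the "car/auto" latent dimension despite using
# different term frequencies; docs 0 and 2 share nothing.
print(cos(doc_vectors[0], doc_vectors[1]))  # close to 1.0
print(cos(doc_vectors[0], doc_vectors[2]))  # close to 0.0
```

This is the mechanism by which LSI mitigates term mismatch: proximity in the latent space reflects co-occurrence patterns rather than exact term overlap.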

The idea of modeling collections based on their topics, and of representing each topic as a probability distribution over terms, is central to state-of-the-art approaches; it also provides additional benefits over spatial representations. Probabilistic topic modeling methods have been shown to improve access to information in collections in different application scenarios, such as retrieval (R. M. Li, Kaptein, Hiemstra, & Kamps, 2008; Wei & Croft, 2006) or collection browsing (Blei & Lafferty, 2007). On the basis of such evidence, we may confidently state that creating a topic model of the collection is a necessary step towards more adaptive search engines and applications. However, due to their high computational complexity, the applicability of probabilistic topic modeling methods remains limited on large corpora.
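The probabilistic representation can be sketched generatively (a toy illustration with made-up distributions, not taken from the chapter): each topic is a categorical distribution over the vocabulary, and a document is produced by repeatedly drawing a topic from the document's mixture weights and then a word from that topic.

```python
import random

random.seed(0)

# Toy topics: probability distributions over a shared vocabulary.
topics = {
    "sports":  {"game": 0.5, "team": 0.4, "bank": 0.1},
    "finance": {"bank": 0.5, "stock": 0.4, "game": 0.1},
}

def sample_document(mixture, n_words):
    """Generate a document: for each word, pick a topic from the
    mixture weights, then pick a word from that topic's distribution."""
    words = []
    for _ in range(n_words):
        topic = random.choices(list(mixture), weights=list(mixture.values()))[0]
        dist = topics[topic]
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return words

# A document that is 70% "sports" and 30% "finance".
doc = sample_document({"sports": 0.7, "finance": 0.3}, 10)
print(doc)
```

Methods like PLSI and LDA invert this generative story: given only the observed words, they estimate the topic distributions and per-document mixtures, which is precisely the costly inference step the paragraph above refers to.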

Therefore, with the aim of making semantic modeling feasible on large-scale collections, in this work we propose the Query-Based Topic Modeling framework (QTM), an alternative topic modeling method based on a simplified representation of topics as freely overlapping sets, or clusters, of semantically similar documents. By simplifying the notion of topic, the problem of Probabilistic Topic Modeling can be reformulated as one of “Discrete Topic Modeling,” essentially transforming it into an overlapping clustering problem and thus making it possible to take advantage of the broad array of clustering techniques in the literature.
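The simplified representation can be illustrated with a minimal sketch (the data structures and document ids below are our own illustrative assumptions, not the chapter's actual QTM implementation): topics are plain sets of document identifiers that may freely overlap, and a document may belong to several topics, or to none, since the clustering may be incomplete.

```python
# Topics as freely overlapping sets of document ids (illustrative data).
topics = {
    "sports":   {0, 1, 2, 5},
    "politics": {2, 3, 5},
    "finance":  {4, 5},
}

def topics_of(doc_id):
    """Return every topic whose document set contains doc_id."""
    return {name for name, docs in topics.items() if doc_id in docs}

# A document may belong to several topics (overlapping clustering) ...
print(topics_of(5))  # {'sports', 'politics', 'finance'}
print(topics_of(2))  # {'sports', 'politics'}

# ... or to none at all (incomplete clustering).
print(topics_of(6))  # set()
```

Compared with a probability distribution over words, this discrete set representation trades expressiveness for simplicity, which is what allows ordinary overlapping clustering algorithms to be applied at scale.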
