Topic Modeling for Web Community Discovery

Kulwadee Somboonviwat (King Mongkut’s Institute of Technology Ladkrabang (KMITL), Thailand)
Copyright: © 2013 |Pages: 18
DOI: 10.4018/978-1-4666-2806-9.ch005


The proliferation of the Web has led to the simultaneous explosive growth of both textual and link information, and many techniques have been developed to cope with this information explosion. Early efforts include non-Bayesian Web community discovery methods that exploit only link information to identify groups of topically coherent Web pages. Most non-Bayesian methods produce hard clustering results and cannot provide a semantic interpretation. Recently, there has been growing interest in applying Bayesian approaches to Web community discovery. Bayesian approaches possess desirable characteristics, such as soft clustering and the ability to provide a semantic interpretation of the extracted communities. This chapter presents a systematic survey and discussion of non-Bayesian and Bayesian approaches to the Web community discovery problem.
Chapter Preview


In recent years, the World Wide Web has become a popular platform for disseminating and searching for information. Due to the explosive growth of the Web, the low precision of Web search engines, and the lack of a data model for Web data, it is increasingly difficult for users to search for and access the information they need. Motivated by this problem, much research has been devoted to discovering implicit communities of topically related Web pages, known as Web communities (e.g. Gibson, et al., 1998; Kumar, et al., 1999; Flake, et al., 2000). Web communities provide invaluable, reliable, and up-to-date topic-specific information resources for users interested in them. Furthermore, a set of extracted Web communities can serve as a key building block in Web applications and value-added services such as focused crawling, Web portals, Web search ranking, Web spam detection, Web recommendation, and Web personalization (e.g. Flake, et al., 2000; Pierrakos, et al., 2003; Otsuka, et al., 2004; Li, et al., 2010).

Conceptually, a Web community is defined as a set of Web pages on a specific topic created by people sharing the same interests. A Web community usually manifests itself as a subgraph with dense connections and coherent content. Most work on Web community discovery has focused on the efficient detection of community structure based purely on the link information between Web pages, using non-Bayesian approaches such as spectral methods, graph partitioning, and clustering (e.g. Kumar, et al., 1999; Flake, et al., 2000; Toyoda & Kitsuregawa, 2001). These non-Bayesian, link-based methods lack semantic interpretation (most implementations summarize a community's topic by its top-k most frequent keywords). Furthermore, most non-Bayesian approaches produce hard clusterings and do not allow a Web page to be assigned to more than one community.
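To make the link-only, hard-clustering idea concrete, the following is a minimal illustrative sketch (not any specific published algorithm from the survey): on an assumed toy link graph, edges that are not embedded in any triangle are pruned, and the connected components of the pruned graph are taken as communities. This mimics the intuition that a community is a densely connected subgraph, and it also exhibits the limitation noted above: each page lands in exactly one community.

```python
from collections import defaultdict

# Assumed toy Web link graph (undirected): two dense groups joined by one bridge.
edges = [("A", "B"), ("A", "C"), ("B", "C"),
         ("D", "E"), ("D", "F"), ("E", "F"),
         ("C", "D")]  # bridge edge between the two groups

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Keep only edges whose endpoints share a common neighbor (i.e. edges in a
# triangle); cross-community bridges typically lack this local density.
dense = defaultdict(set)
for u, v in edges:
    if adj[u] & adj[v]:
        dense[u].add(v)
        dense[v].add(u)

def components(nodes, nbrs):
    """Connected components of the pruned graph = hard community assignment."""
    seen, comms = set(), []
    for n in sorted(nodes):
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            seen.add(x)
            stack.extend(nbrs[x] - comp)
        comms.append(sorted(comp))
    return comms

# The bridge C-D is pruned, leaving two communities: A,B,C and D,E,F.
print(components(adj.keys(), dense))
```

Note that, exactly as the text observes, this style of method yields disjoint page sets and says nothing about what topic each community is about beyond the pages themselves.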

On the other hand, probabilistic topic models (e.g. Blei, Ng, & Jordan, 2003; Griffiths & Steyvers, 2002, 2003, 2004; Hofmann, 1999, 2001) have recently gained much popularity as a suite of algorithmic tools for organizing, searching, and understanding large collections of text documents. The key idea underlying these models is that each document is generated by a probabilistic process and consists of a mixture of topics, where a topic is a probability distribution over words from a fixed vocabulary. This representation naturally captures the hidden topical structure of text and can be used in text mining tasks to discover the topics in a large text collection.
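The generative view described above can be inverted by posterior inference. As an illustration, here is a minimal collapsed Gibbs sampler for LDA in the spirit of Griffiths and Steyvers (2004), run on an assumed toy corpus with two latent themes (the corpus, hyperparameters, and topic count are all illustrative choices, not from the chapter):

```python
import random
from collections import Counter

random.seed(0)

# Assumed toy corpus with two latent themes; the last document mixes both.
docs = [["ball", "goal", "team", "goal"],
        ["team", "ball", "match", "goal"],
        ["oven", "flour", "bake", "flour"],
        ["bake", "oven", "recipe", "flour"],
        ["ball", "team", "bake", "oven"]]

K, alpha, beta = 2, 0.5, 0.1          # topics and Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# z[d][i] is the topic of word i in document d; the counters below are the
# standard LDA bookkeeping (doc-topic, topic-word, and topic totals).
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [Counter(zd) for zd in z]
nkw = [Counter() for _ in range(K)]
nk = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        nkw[z[d][i]][w] += 1
        nk[z[d][i]] += 1

for _ in range(200):                   # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                # remove the word's current assignment
            ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
            # p(topic t) ∝ (n_dt + alpha) * (n_tw + beta) / (n_t + V*beta)
            weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                       for t in range(K)]
            k = random.choices(range(K), weights)[0]
            z[d][i] = k                # resample and restore the counts
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

# Each topic is summarized by its most probable words; each document by its
# (soft) topic proportions — the two outputs the generative model promises.
for t in range(K):
    print("topic", t, [w for w, _ in nkw[t].most_common(3)])
```

Unlike the hard clusterings of the link-based methods above, the mixed document here receives a distribution over both topics, which is precisely the soft-assignment and semantic-interpretation advantage the chapter attributes to the Bayesian approaches.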

With the explosive growth of the Web and of linked data sets, some recent work on topic modeling has extended the basic topic models to take the link structure into account. The work in this area can be classified into five directions. The first line of work (e.g. PHITS-PLSA by Cohn & Hofmann, 2001; LDA-Link-Word by Erosheva, et al., 2004; Link-PLSA-LDA by Nallapati, et al., 2008) incorporates link information directly into the document generative model. The second line of work (relational or supervised topic models) models textual content and links separately by representing the link between two documents as a binary random variable conditioned on their content (e.g. Chang & Blei, 2009). The third line of work regularizes topic models with a discrete regularizer defined on the link structure of the data set (e.g. NetPLSA by Mei, et al., 2008). The fourth line of work (e.g. iTopicModel by Sun, et al., 2009) models the relationships between documents using a multivariate Markov Random Field (MRF). Lastly, Yang et al. (2009) proposed PCL, a discriminative model that combines link and content information for community detection.