On Knowledge-Enhanced Document Clustering

Manjeet Rege, Josan Koruthu, Reynold Bailey
Copyright: © 2012 | Pages: 11
DOI: 10.4018/ijirr.2012070105

Abstract

Document clustering plays an important role in text analytics by finding natural groupings of documents based on their similarity, as determined by the words appearing in them. Many of the clustering algorithms accessible through various text analytics tools are completely unsupervised in nature. That is, they are unable to incorporate any domain knowledge that might be available about the documents to improve clustering accuracy and relevance. The authors present a graph partitioning based semi-supervised document clustering algorithm. The user provides knowledge about a few of the documents in the form of “must-link” and “cannot-link” constraints between pairs of documents. A “must-link” constraint between two documents expresses the fact that the user feels the two documents must be clustered together irrespective of their dissimilarity. Similarly, a “cannot-link” constraint signifies that the two documents should never be clustered together no matter how similar they might happen to be. These constraints are then incorporated into a computationally efficient, graph partitioning based document clustering algorithm. The proposed framework is validated through experiments performed on publicly available text datasets.

1. Introduction

In text analytics (Srivastava & Sahami, 2009), document clustering refers to the problem of automatically grouping documents into different groups (known as clusters), such that documents in one cluster are similar to each other while being dissimilar from those in a different cluster. Typically, the dataset is represented using the vector model, in which a set of m documents with n unique words forms an m x n document-word matrix (Baeza-Yates & Ribeiro-Neto, 2011). The entry (i, j) of the matrix denotes the frequency of word j in document i.
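
To make the representation concrete, the following minimal sketch builds such a document-word matrix; scikit-learn's CountVectorizer and the toy corpus are illustrative assumptions, as the article does not tie the vector model to any particular library:

```python
# A minimal sketch of the vector model: build an m x n document-word
# matrix where entry (i, j) is the frequency of word j in document i.
# CountVectorizer is used purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stocks fell as markets closed",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)    # sparse m x n matrix

print(X.shape)                             # (3, number of unique words)
print(vectorizer.get_feature_names_out())  # the n unique words
print(X.toarray())                         # raw term frequencies
```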

Document clustering methods can be categorized into two approaches, viz., partitional and hierarchical clustering. Partitional or flat clustering directly divides the set of documents into different clusters. Hierarchical clustering, on the other hand, creates a tree-like structure from the documents, where the nodes at each level of the tree constitute a clustering at that level. Hierarchical agglomerative clustering (HAC) combines clusters from lower levels into clusters at higher levels (Croft, 1977; Willett, 1988); for example, four clusters at one level may be merged to obtain two clusters at the level above it. The similarity between clusters is first computed to decide which clusters should be merged. Variants of HAC have been proposed based on how this cluster similarity is computed, such as single-linkage, complete-linkage, and group-average linkage. The top-most level of the tree has one cluster consisting of all documents, while at the lowest level each document forms a cluster of its own. A clustering with a desired number of clusters can be obtained by extracting the clusters from an appropriate level of the tree, as sketched below.
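
The sketch below illustrates this tree-cutting step: group-average HAC is run over a toy document-word matrix and a flat clustering is extracted at the level that yields the desired number of clusters. SciPy's hierarchy routines and the toy matrix are assumptions made for demonstration only:

```python
# A sketch of hierarchical agglomerative clustering (HAC) with
# group-average linkage over a toy document-word matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy m x n document-word matrix (4 documents, 5 words).
X = np.array([
    [2, 1, 0, 0, 0],
    [1, 2, 1, 0, 0],
    [0, 0, 0, 2, 1],
    [0, 0, 1, 1, 2],
])

dists = pdist(X, metric="cosine")        # pairwise document distances
tree = linkage(dists, method="average")  # group-average linkage HAC

# "Cut" the tree at the level that yields the desired number of clusters.
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)                            # e.g., [1 1 2 2]
```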

Amongst the partitional document clustering approaches, one of the most popular methods is to apply k-means clustering (Jain, 2010) to the set of documents, which minimizes the sum of squared distances between documents and cluster centroids. Other approaches, such as Naive Bayes or the Gaussian mixture model, define a probabilistic model over the document corpus and attempt to fit the model using maximum likelihood (Baker & McCallum, 1998; Liu et al., 2002). The problem with these approaches is that they make strong assumptions about the documents: k-means assumes compactness of the data, Naive Bayes assumes independence of the words, and the Gaussian mixture model assumes that the underlying data distribution is normal. These assumptions do not necessarily hold in practice and hence limit the applicability of these approaches. Methods have also been proposed based on non-negative matrix factorization (NMF) (Xu et al., 2003; Huang et al., 2011), in which the given data matrix is factorized into lower-rank matrices whose factors yield the document clusters. NMF-based methods are iterative in nature, typically need to be run multiple times with different initializations to avoid poor local optima, and might not be applicable to realistic datasets.
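
For concreteness, the following sketch applies k-means to tf-idf document vectors, minimizing the sum of squared distances to the cluster centroids. The vectorizer, parameters, and toy corpus are illustrative assumptions, not the setup used in the article:

```python
# A sketch of partitional document clustering: k-means over tf-idf
# vectors. TfidfVectorizer and KMeans are illustrative scikit-learn
# choices; n_init restarts mitigate sensitivity to initialization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stocks fell as markets closed",
    "investors sold shares in early trading",
]

X = TfidfVectorizer().fit_transform(documents)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # e.g., pet documents vs. finance documents
```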

As can be seen from this brief discussion, although a lot of attention has been devoted to document clustering, all of these methods are completely unsupervised in nature; that is, they are unable to incorporate any domain knowledge that might be available to guide the clustering process. A publishing company interested in clustering books might know that a few books should always be clustered together (e.g., two children’s books) and that some other books should never be clustered together (e.g., a book for mature audiences and a children’s book), regardless of the similarity measure. An unsupervised document clustering algorithm clusters documents purely based on the similarity between them as defined by the previously mentioned vector model. Two documents meant for different audiences may get clustered together because they share a common set of words; conversely, two documents that are meant to be clustered together might never be, owing to the dissimilarity between the words appearing in them. From a text analytics perspective, it is therefore very important for a document clustering algorithm to be able to incorporate any domain knowledge that may be available from experts.
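
The following minimal sketch shows how such pairwise knowledge is commonly encoded as must-link and cannot-link pairs, together with a hypothetical helper that checks whether a clustering respects them. How the constraints are folded into the graph partitioning objective is the contribution of the article itself and is not reproduced here:

```python
# A minimal sketch of pairwise constraints for semi-supervised
# clustering: must-link pairs must share a cluster, cannot-link pairs
# must not. satisfies_constraints is a hypothetical helper that only
# verifies a clustering against the constraints.

must_link = [(0, 1)]    # documents 0 and 1 belong together
cannot_link = [(0, 2)]  # documents 0 and 2 must be separated

def satisfies_constraints(labels, must_link, cannot_link):
    """Return True if the clustering respects all pairwise constraints."""
    for i, j in must_link:
        if labels[i] != labels[j]:
            return False
    for i, j in cannot_link:
        if labels[i] == labels[j]:
            return False
    return True

print(satisfies_constraints([0, 0, 1, 1], must_link, cannot_link))  # True
print(satisfies_constraints([0, 1, 0, 1], must_link, cannot_link))  # False
```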
