Dynamic Clustering Based on Minimum Spanning Tree and Context Similarity for Enhancing Document Classification

Anirban Chakrabarty (Department of Computer Applications, Future Institute of Engineering and Management, Sonarpur, Kolkata, India) and Sudipta Roy (Triguna Sen School of Information Technology, Assam Central University, Silchar, Assam, India)
Copyright: © 2014 |Pages: 15
DOI: 10.4018/ijirr.2014010103


Document classification is the task of assigning a text document to one or more predefined categories according to its content and labeled training samples. Traditional classification schemes use all training samples for classification, so storage requirements and computational complexity grow as the number of features increases. Moreover, commonly used classification techniques assume the number of categories is known in advance, which is often not the case in practice; for an unknown dataset, it is essential to determine the number of clusters dynamically. Addressing these limitations, the proposed work develops a text clustering algorithm in which clusters are generated dynamically from a minimum spanning tree incorporating semantic features. The proposed model can efficiently find significant matching concepts between documents and can perform multi-category classification. The formal analysis is supported by applications to email and cancer data sets. Cluster quality and accuracy values were compared with those of several widely used text clustering techniques, demonstrating the efficiency of the proposed approach.
Article Preview


In the digital era, the rapid growth in the volume of text documents available from sources such as the Internet, digital libraries, and medical records has spurred users to effectively retrieve, navigate, and organize information. The ultimate goal is to help users find what they are looking for effortlessly and make decisions accordingly. In this context, fast and high-quality document clustering algorithms play a major role. Most common techniques in text retrieval are based on the statistical analysis of terms, i.e., words or phrases. Such text retrieval methods rely on the vector space model (VSM), a widely used data representation. The VSM represents each document as a feature vector of terms, in the form of term frequencies or term weights (Salton et al., 1975). The similarity between documents is measured by one of several measures based on the feature vector; examples include the cosine measure and the Jaccard measure (Schaeffer, 2007). Metric distances such as Euclidean distance are not appropriate for high-dimensional, sparse domains. Most conventional measures estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. To achieve a more accurate analysis, the underlying model should capture the semantics of the text. Conceptual information retrieval extracts information by processing documents at the semantic level to form a concept base, and then retrieves related information to provide search results.
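The VSM similarity computation described above can be illustrated with a minimal sketch. This is an assumed implementation using raw term frequencies (the original paper uses term weights and semantic features); the function name and tokenization are illustrative only.

```python
from collections import Counter
import math

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between the term-frequency vectors of two documents."""
    tf_a, tf_b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    shared = set(tf_a) & set(tf_b)
    dot = sum(tf_a[t] * tf_b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

As the paragraph notes, such a measure scores only surface word overlap: two documents with no shared words receive similarity 0 even when they discuss the same concepts, which motivates the semantic, concept-based measure used in this work.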

Clustering methods can be used to automatically group retrieved documents into a list of meaningful categories. Methods used for text clustering include decision trees, contextual clustering, clustering based on data summarization, statistical analysis, neural nets, and rule-based systems, among others (Nahm & Mooney, 2000; Talavera & Bejar, 2001; Jin et al., 2005). Most clustering techniques assume the number of clusters is fixed, which can result in poor-quality clustering. Dynamic document clustering inserts newly arrived documents into the appropriate existing cluster, so clusters need not be rebuilt, drastically reducing the time and effort required (Wang et al., 2011; Nadig et al., 2008). Figure 1 depicts a model for dynamic document clustering.

Figure 1.

A model for dynamic document clustering


A spanning tree is an acyclic subgraph of a graph G that contains all vertices of G and is itself a tree. The minimum spanning tree (MST) of a weighted graph is the spanning tree of minimum total weight (Edla & Jana, 2013). MST clustering algorithms are known to be capable of detecting clusters with irregular boundaries. Moreover, the MST is relatively insensitive to small amounts of noise spread over the field (Zahn, 1971). Thus the shape of a cluster boundary has little impact on the performance of the algorithm. The proposed approach does not require a preset number of clusters. Edges that satisfy a predefined inconsistency measure are removed from the tree, and the process is iterated until there is no further change in the edge list and all data are clustered.
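The MST clustering scheme just described can be sketched in a self-contained way. This is an assumed illustration, not the paper's algorithm: it builds the MST with Kruskal's algorithm over Euclidean distances and uses a simple inconsistency rule (cut edges longer than a factor times the mean MST edge weight), whereas the paper applies a semantic, context-based measure.

```python
def mst_clusters(points, inconsistency_factor=2.0):
    """Build an MST with Kruskal's algorithm, cut 'inconsistent' edges
    (longer than factor * mean MST edge weight), and return a cluster
    label for each point from the resulting connected components."""
    n = len(points)
    dist = lambda i, j: sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
    edges = sorted((dist(i, j), i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # edge joins two components: keep it
            parent[ri] = rj
            mst.append((w, i, j))

    # Rebuild components using only the consistent (short) MST edges.
    mean_w = sum(w for w, _, _ in mst) / len(mst)
    parent = list(range(n))
    for w, i, j in mst:
        if w <= inconsistency_factor * mean_w:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Note that the number of clusters is never specified: it emerges from how many edges exceed the inconsistency threshold, which is exactly the property that lets the proposed approach avoid a preset cluster count.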

The paper proposes a context-based retrieval method at the sentence, document, and corpus levels for enhancing the quality of text retrieval. More specifically, it quantifies how closely concepts relate to each other and integrates this into a document similarity measure. As a result, documents do not have to mention the same words to be judged similar. The suggested clustering technique is applied to two different data sets - email messages and cancer records - to demonstrate its feasibility. A major contribution of this work is the dynamic development of clusters based on email users' areas of interest; applied to the cancer data sets, it classifies patients into different treatment clusters by age group. The work also introduces a text classification algorithm that allows incremental and multi-label classification by comparison with the context pool, i.e., the most significant concepts in a cluster.
