Automatic Text Classification from Labeled and Unlabeled Data


Eric P. Jiang
DOI: 10.4018/978-1-4666-1806-0.ch013

Abstract

Automatic text classification is a process that applies information retrieval technology and machine learning algorithms to build models from pre-labeled training samples and then deploys the models to previously unseen documents for classification. Text classification has been widely applied in many fields, ranging from Web page indexing, document filtering, and information security to business intelligence mining. This chapter presents a semi-supervised text classification framework based on radial basis function (RBF) neural networks. The framework integrates an Expectation Maximization (EM) process into an RBF network and can learn effectively from a very small quantity of labeled training samples together with a large pool of additional unlabeled documents. The effectiveness of the framework is demonstrated by experiments on two popular text classification corpora.

Introduction

Automatic text classification refers to the process of applying information retrieval technology and machine learning algorithms to build models from pre-labeled training samples and to deploy the models to previously unseen documents for classification. With the rapid growth of the Internet and advances in computer technology, more textual documents than ever before have been digitized, and digital libraries and encyclopedias have become increasingly valuable information resources. Recently, Google announced (Google, 2010) that, as part of the Google Books project started in 2004, it has successfully scanned more than 15 million books from more than 100 countries in over 400 languages. Text classification has been widely applied in many areas, including Web indexing, document filtering and management, information security, business and marketing intelligence mining, and customer service automation, and it has played and will continue to play an important role in this digital transformation.

Over the years, a number of machine learning algorithms have been successfully used to build text classification models (Sebastiani, 2002). Among them, naïve Bayes (Sahami et al., 1998), nearest neighbor (Aha & Albert, 1991), decision trees with boosting (Schapire et al., 2000), and support vector machines (Cristianini & Shawe-Taylor, 2000) are the most cited. As supervised learning methods, these algorithms require a pre-labeled training dataset and, in general, the quantity and quality of the training data have a direct impact on classification effectiveness. Given a sufficient number of training samples, this supervised modeling process can produce reasonably good classification results; however, it may perform inadequately when only a limited amount of labeled training data is on hand. In many real-world applications, hand-labeling a large quantity of textual data can be labor-intensive or extremely difficult. For instance, text classification can be used to develop personal software agents that automatically capture, filter, and route relevant Web news articles to individual online readers based on their collected reading interests. Such products would likely require a few hundred or more user-labeled training articles in order to achieve acceptable accuracy (Nigam et al., 2000). Although this particular labeling task is doable, it can still be very tedious, undesirable, and time-consuming. Web page categorization is another example in this context. Given the rapid proliferation of online information and its dynamic nature, accurately categorizing all available Web pages is an invaluable yet very challenging task. Any attempt to classify this gigantic information collection, even if it is limited to some specific topics, would require a set of labeled training pages whose size might be too large for manual labeling to be feasible.
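To make the supervised baseline concrete, the following is a minimal sketch of one of the algorithms cited above, a multinomial naïve Bayes text classifier, written from scratch. The toy corpus, the Laplace smoothing constant, and all function names are illustrative assumptions, not taken from the chapter; the point is that the estimated probabilities, and hence accuracy, hinge entirely on the labeled examples supplied.

```python
# Illustrative multinomial naive Bayes text classifier (sketch only;
# the toy corpus and smoothing choice are assumptions, not the chapter's).
import math
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    """Estimate log class priors and Laplace-smoothed word log likelihoods."""
    vocab = {w for d in docs for w in d.split()}
    classes = set(labels)
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    loglik = {}
    for c in classes:
        total = sum(counts[c].values()) + alpha * len(vocab)
        loglik[c] = {w: math.log((counts[c][w] + alpha) / total) for w in vocab}
        loglik[c]["<unk>"] = math.log(alpha / total)  # unseen-word fallback
    return prior, loglik

def predict_nb(doc, prior, loglik):
    """Return the class maximizing log prior + sum of word log likelihoods."""
    scores = {c: prior[c] + sum(loglik[c].get(w, loglik[c]["<unk>"])
                                for w in doc.split())
              for c in prior}
    return max(scores, key=scores.get)

# Tiny labeled training set -- the model's quality is bounded by its size.
docs = ["cheap pills buy now", "meeting agenda attached",
        "buy cheap now", "project meeting notes"]
labels = ["spam", "ham", "spam", "ham"]
prior, loglik = train_nb(docs, labels)
print(predict_nb("cheap buy pills", prior, loglik))  # prints: spam
```

With only four labeled documents the classifier already separates these two toy classes, but any word outside the tiny vocabulary falls back to the smoothed unknown-word probability, which is exactly the sparsity problem that motivates using additional unlabeled data.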

In the last few years, there has been surging interest in developing semi-supervised learning models that are capable of learning from both labeled training samples and additional (relevant) unlabeled data. The semi-supervised learning paradigm is particularly pertinent to many text classification problems where labeled training samples are limited in supply while unlabeled relevant documents are abundantly available. This chapter presents a semi-supervised text classification framework that integrates a clustering-based Expectation Maximization (EM) process into a radial basis function (RBF) neural network. The framework can learn effectively from a very small number of labeled samples and a large quantity of additional unlabeled documents. Briefly, the framework first trains an RBF network by applying a clustering algorithm to both labeled and unlabeled data iteratively to compute the RBF middle-layer network parameters, and then uses a regression model to determine the RBF output-layer network weights. It uses the known class labels in the training data to guide the clustering process and can also apply a weighting scheme to unlabeled data to balance predictive contributions between labeled and unlabeled data. This balanced use of both labeled and unlabeled data helps improve classification accuracy.
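The training steps just described can be sketched roughly as follows. This is a hedged illustration under stated assumptions, not the chapter's exact formulation: centers are seeded from labeled class means, a few EM-style clustering passes over labeled plus unlabeled points refine them (with labeled points pinned to their known classes and unlabeled points down-weighted), a simple heuristic sets the Gaussian width, and the output-layer weights are solved by least-squares regression. All names, the width rule, and the unlabeled weight of 0.5 are illustrative choices.

```python
# Hedged sketch of semi-supervised RBF training: EM-style clustering on
# labeled + unlabeled data for the middle layer, regression for the output
# layer. The width heuristic and unl_weight=0.5 are assumptions.
import numpy as np

def rbf_hidden(X, centers, width):
    """Gaussian activations of the RBF middle layer."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def train_semisup_rbf(X_lab, y_lab, X_unl, n_iter=5, unl_weight=0.5):
    classes = np.unique(y_lab)
    centers = np.stack([X_lab[y_lab == c].mean(0) for c in classes])
    X_all = np.vstack([X_lab, X_unl])
    for _ in range(n_iter):  # EM-style refinement of cluster centers
        d2 = ((X_all[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        # Known labels guide the clustering: labeled points stay in class.
        assign[: len(X_lab)] = np.searchsorted(classes, y_lab)
        # Weighting scheme: unlabeled points contribute less than labeled.
        w = np.r_[np.ones(len(X_lab)), np.full(len(X_unl), unl_weight)]
        for k in range(len(classes)):
            m = assign == k
            if m.any():
                centers[k] = np.average(X_all[m], axis=0, weights=w[m])
    # Common RBF width heuristic: max inter-center distance / sqrt(2K).
    dmax = np.sqrt(((centers[None] - centers[:, None]) ** 2).sum(-1)).max()
    width = dmax / np.sqrt(2 * len(classes))
    # Output layer: least-squares regression onto one-hot class targets.
    H = rbf_hidden(X_lab, centers, width)
    T = (y_lab[:, None] == classes[None, :]).astype(float)
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return centers, width, W, classes

def predict(X, centers, width, W, classes):
    return classes[(rbf_hidden(X, centers, width) @ W).argmax(1)]

# Toy usage: 2 labeled points per class, the rest unlabeled.
rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.3, (20, 2))
X1 = rng.normal([3, 3], 0.3, (20, 2))
X_lab = np.vstack([X0[:2], X1[:2]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.vstack([X0[2:], X1[2:]])
model = train_semisup_rbf(X_lab, y_lab, X_unl)
print(predict(np.array([[0.1, -0.1], [2.9, 3.1]]), *model))
```

Even with only four labeled points, the unlabeled data pull the centers toward the true cluster means, which is the intuition behind letting clustering on the combined data set the middle-layer parameters while the (scarce) labels anchor the cluster-to-class correspondence and the output-layer regression.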
