Positive Unlabelled Learning for Document Classification

Xiao-Li Li
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-010-3.ch238

Abstract

In traditional supervised learning, a large number of labeled positive and negative examples are typically required to learn an accurate classifier. In practice, however, obtaining class labels for large sets of training examples is costly, and negative examples are often lacking altogether. These practical considerations motivate a new family of classification algorithms that learn from a set of labeled positive examples P augmented with a set of unlabeled examples U (which contains both hidden positive and hidden negative examples). That is, we want to build a classifier using P and U, in the absence of negative examples, to classify the data in U as well as future test data. This problem is called the Positive Unlabelled learning problem, or PU learning problem.

For instance, a computer scientist may want to build an up-to-date repository of machine learning (ML) papers. Ideally, one can start with an initial set of ML papers (e.g., a personal collection) and then use it to find other ML papers in related online journals or conference series, e.g., the Artificial Intelligence journal, AAAI (National Conference on Artificial Intelligence), IJCAI (International Joint Conference on Artificial Intelligence), SIGIR (ACM Conference on Research & Development in Information Retrieval), and KDD (ACM International Conference on Knowledge Discovery and Data Mining). With the enormous volume of text documents on the Web, in Internet news feeds, and in digital libraries, finding the documents related to one's interest can be a real challenge.

In the application above, the class of documents one is interested in is called the positive class (i.e., the ML papers in the online sources). The set of known positive documents is denoted P (namely, the initial personal collection of ML papers). The unlabeled set U (papers from the AAAI proceedings, etc.) contains two groups of documents. One group consists of the hidden positive documents in U, i.e., documents that belong to the positive class (e.g., the ML papers in an AAAI proceedings). The other group, comprising the rest of the documents in U, consists of negative documents (e.g., the non-ML papers in an AAAI proceedings), since they do not belong to the positive class. Given the positive set P, PU learning aims to identify the documents of class P in U, or to classify a future test set into the positive and negative classes. Note that collecting unlabeled documents is normally easy and inexpensive, especially from online sources.
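To make the setup concrete, the following is a minimal Python sketch of how the inputs to a PU learner might be represented; it assumes scikit-learn for feature extraction, and the example documents and variable names are illustrative rather than taken from the chapter.

```python
# Minimal sketch of the PU learning input, assuming scikit-learn is available.
from sklearn.feature_extraction.text import TfidfVectorizer

# P: known positive documents (e.g., one's personal collection of ML papers).
P = ["a study of boosting for text categorization",
     "support vector machines for relation extraction"]

# U: unlabeled documents (e.g., all papers in an AAAI proceedings); it mixes
# hidden positives (ML papers) and negatives (non-ML papers), with no labels.
U = ["heuristic search for classical planning",
     "semi-supervised learning with expectation maximization",
     "auction mechanisms for multi-agent resource allocation"]

# Put both sets in a shared feature space; a PU learner then uses only
# X_P (labeled positive) and X_U (unlabeled) to recover the hidden labels.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(P + U)
X_P, X_U = X[:len(P)], X[len(P):]
```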
Chapter Preview

Background

A theoretical study of PAC learning from positive and unlabeled examples under the statistical query model was first reported in (Denis, 1998). It assumes that the proportion of positive instances in the unlabeled set is known. Letouzey et al. (Letouzey, Denis, & Gilleron, 2000; Denis, Gilleron, & Letouzey, 2005) presented a learning algorithm based on a modified decision tree algorithm in this model. Muggleton (1997) studied the problem in a Bayesian framework where the distributions of functions and examples are assumed to be known. Liu, Lee, Yu, and Li (2002) reported sample complexity results and provided theoretical elaborations on how the problem may be solved.

A number of practical algorithms, S-EM, PEBL and Roc-SVM (Liu et al., 2002; Yu, Han, & Chang, 2002; Li & Liu, 2003), have also been proposed. They all conform to the theoretical results in (Liu et al., 2002) and follow a two-step strategy: (1) identify a set of reliable negative documents RN from the unlabeled set (called strong negative documents in PEBL), and (2) build a classifier using P (the positive set), RN (as the negative set) and U − RN (the remaining unlabeled set) by applying an existing learning algorithm (such as EM or SVM) once or iteratively. Their specific differences are described in the next subsection, “Existing Techniques S-EM, PEBL and Roc-SVM”.
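As a rough illustration of this generic two-step strategy (not the exact S-EM, PEBL or Roc-SVM procedures), the Python sketch below uses a naive Bayes classifier to extract reliable negatives RN from U and then trains a single SVM on P and RN; the 0.1 threshold, the scikit-learn components, and the non-iterative second step are all simplifying assumptions.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def two_step_pu(X_P, X_U, threshold=0.1):
    """X_P, X_U: document-term matrices for P and U (e.g., sparse TF-IDF)."""
    # Step 1: treat all of U as negative, train a rough classifier, and keep
    # the documents in U that look most confidently negative as RN.
    X = vstack([X_P, X_U])
    y = np.concatenate([np.ones(X_P.shape[0]), np.zeros(X_U.shape[0])])
    nb = MultinomialNB().fit(X, y)
    p_pos = nb.predict_proba(X_U)[:, 1]
    rn_rows = np.flatnonzero(p_pos < threshold)   # reliable negatives RN
    X_RN = X_U[rn_rows]

    # Step 2: build the final classifier from P (positive) and RN (negative);
    # S-EM and Roc-SVM would instead apply EM or SVM iteratively over U - RN.
    X2 = vstack([X_P, X_RN])
    y2 = np.concatenate([np.ones(X_P.shape[0]), np.zeros(X_RN.shape[0])])
    return LinearSVC().fit(X2, y2)
```

The threshold that defines "reliable" negatives is the main tuning knob here; S-EM instead plants "spy" positives in U to calibrate it, and PEBL uses a 1-DNF rule on strong positive features, which is why the methods differ mainly in step 1.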
