Active Learning with Multiple Views

Ion Muslea
Copyright © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-010-3.ch002

Abstract

Inductive learning algorithms typically use a set of labeled examples to learn class descriptions for a set of user-specified concepts of interest. In practice, labeling the training examples is a tedious, time-consuming, error-prone process. Furthermore, in some applications, labeling each example may also be extremely expensive (e.g., it may require running costly laboratory tests). In order to reduce the number of labeled examples that are required for learning the concepts of interest, researchers have proposed a variety of methods, such as active learning, semi-supervised learning, and meta-learning. This article presents recent advances in reducing the need for labeled data in multi-view learning tasks; that is, in domains in which there are several disjoint subsets of features (views), each of which is sufficient to learn the target concepts. For instance, as described in Blum and Mitchell (1998), one can classify segments of televised broadcast based either on the video or on the audio information; or one can classify Web pages based on the words that appear either in the pages or in the hyperlinks pointing to them. In summary, this article focuses on using multiple views for active learning and on improving multi-view active learners by using semi-supervised learning and meta-learning.

Background

Active, Semi-Supervised, and Multi-View Learning

Most of the research on multi-view learning focuses on semi-supervised learning techniques (Collins & Singer, 1999; Pierce & Cardie, 2001); that is, on learning concepts from a few labeled and many unlabeled examples. By themselves, the unlabeled examples do not provide any direct information about the concepts to be learned. However, as shown by Nigam et al. (2000) and Raskutti et al. (2002), their distribution can be used to boost the accuracy of a classifier learned from the few labeled examples.

Intuitively, semi-supervised, multi-view algorithms proceed as follows: first, they use the small labeled training set to learn one classifier in each view; then, they bootstrap the views from each other by augmenting the training set with unlabeled examples on which the other views make high-confidence predictions. Such algorithms improve the classifiers learned from labeled data by also exploiting the implicit information provided by the distribution of the unlabeled examples.
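
As an illustration of this bootstrapping scheme, the following is a minimal co-training-style sketch in the spirit of Blum and Mitchell (1998), not the exact published algorithm; the choice of Naive Bayes base learners, the confidence threshold, and the shared pseudo-labeled pool are simplifying assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled, unlabeled, rounds=10, conf=0.95):
    """Bootstrap two views from each other (simplified co-training sketch)."""
    labeled = list(labeled)
    unlabeled = set(unlabeled)
    y = np.array(y)  # entries outside `labeled` get overwritten by pseudo-labels
    for _ in range(rounds):
        # Learn one classifier per view from the current labeled set.
        c1 = GaussianNB().fit(X1[labeled], y[labeled])
        c2 = GaussianNB().fit(X2[labeled], y[labeled])
        newly = []
        # Each view pseudo-labels the unlabeled examples it is confident on;
        # those examples then augment the training set of both views.
        for clf, X in ((c1, X1), (c2, X2)):
            pool = sorted(unlabeled)
            if not pool:
                break
            proba = clf.predict_proba(X[pool])
            for i, p in zip(pool, proba):
                if i in unlabeled and p.max() >= conf:
                    y[i] = clf.classes_[p.argmax()]
                    newly.append(i)
                    unlabeled.discard(i)
        if not newly:
            break  # no confident predictions left, so stop bootstrapping
        labeled.extend(newly)
    return c1, c2
```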

In contrast to semi-supervised learning, active learners (Tong & Koller, 2001) typically detect and ask the user to label only the most informative examples in the domain, thus reducing the user’s data-labeling burden. Note that active and semi-supervised learners take different approaches to reducing the need for labeled data; the former explicitly search for a minimal set of labeled examples from which to perfectly learn the target concept, while the latter aim to improve a classifier learned from a (small) set of labeled examples by exploiting some additional unlabeled data.
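
For contrast with the bootstrapping scheme above, the following is a generic pool-based active learning loop using uncertainty sampling; it is a standard illustration of the idea rather than the specific method of Tong and Koller (2001), and the oracle callback (which stands in for the human labeler), the base learner, and the query budget are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learn(X, oracle, labeled, budget=20):
    """Pool-based active learning with uncertainty sampling."""
    labeled = list(labeled)  # must already contain at least two classes
    y = {i: oracle(i) for i in labeled}
    pool = set(range(len(X))) - set(labeled)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(budget):
        if not pool:
            break
        clf.fit(X[labeled], [y[i] for i in labeled])
        cand = sorted(pool)
        confidence = clf.predict_proba(X[cand]).max(axis=1)
        query = cand[int(np.argmin(confidence))]  # least confident example
        y[query] = oracle(query)  # the user labels only this one example
        labeled.append(query)
        pool.discard(query)
    return clf.fit(X[labeled], [y[i] for i in labeled])
```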

In keeping with the active learning approach, this article focuses on minimizing the amount of labeled data without sacrificing the accuracy of the learned classifiers. We begin by analyzing co-testing (Muslea, 2002), a novel approach to active learning. Co-testing is a multi-view active learner that maximizes the benefit of labeled training data by providing a principled way to detect the most informative examples in a domain, so that the user labels only those.
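
A sketch of the query-selection step may make this concrete. In the naive variant of co-testing, one classifier is learned per view, and the next query is drawn from the contention points, that is, the unlabeled examples on which the views disagree; the base learner and the random tie-breaking below are illustrative assumptions.

```python
import random
from sklearn.naive_bayes import GaussianNB

def co_testing_query(X1, X2, y, labeled, pool, rng=random.Random(0)):
    """Pick the next example to label: a contention point between the views."""
    c1 = GaussianNB().fit(X1[labeled], y[labeled])
    c2 = GaussianNB().fit(X2[labeled], y[labeled])
    contention = [i for i in pool
                  if c1.predict(X1[[i]])[0] != c2.predict(X2[[i]])[0]]
    # On every contention point at least one view is wrong, so these
    # examples are highly informative; naive co-testing queries one at random.
    return rng.choice(contention) if contention else rng.choice(sorted(pool))
```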

Then, we discuss two extensions of co-testing that cope with its main limitations—the inability to exploit the unlabeled examples that were not queried and the lack of a criterion for deciding whether a task is appropriate for multi-view learning. To address the former, we present Co-EMT (Muslea et al., 2002a), which interleaves co-testing with a semi-supervised, multi-view learner. This hybrid algorithm combines the benefits of active and semi-supervised learning by detecting the most informative examples, while also exploiting the remaining unlabeled examples. To address the latter, we discuss Adaptive View Validation (Muslea et al., 2002b), a meta-learner that uses the experience acquired while solving past learning tasks to predict whether multi-view learning is appropriate for a new, unseen task.
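
To show how such an interleaving could be wired together, the sketch below alternates a semi-supervised bootstrapping step with a contention-point query, reusing the illustrative co_train helper defined earlier; it is a schematic reconstruction under those assumptions, not the published Co-EMT implementation.

```python
import random

def co_emt_sketch(X1, X2, y, oracle, labeled, pool, queries=20):
    """Interleave semi-supervised multi-view learning with co-testing queries."""
    labeled, pool = list(labeled), set(pool)
    rng = random.Random(0)
    for _ in range(queries):
        if not pool:
            break
        # Semi-supervised step: bootstrap both views on the unqueried
        # examples (reuses the co_train sketch above).
        c1, c2 = co_train(X1, X2, y, labeled, sorted(pool))
        # Active step: query a contention point of the refined classifiers.
        contention = [i for i in pool
                      if c1.predict(X1[[i]])[0] != c2.predict(X2[[i]])[0]]
        q = rng.choice(contention or sorted(pool))
        y[q] = oracle(q)  # only these queries cost the user a label
        labeled.append(q)
        pool.discard(q)
    return co_train(X1, X2, y, labeled, sorted(pool))
```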
