Learning with Partial Supervision

Abdelhamid Bouchachia (University of Klagenfurt, Austria)
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-60566-010-3.ch179

Abstract

Recently, the fields of machine learning, pattern recognition, and data mining have witnessed a new research stream: learning with partial supervision (LPS), also known as semi-supervised learning. This learning scheme is motivated by the fact that acquiring labeling information for data can be quite costly and is sometimes prone to mislabeling. The general spectrum of learning from data is depicted in Figure 1. As shown, in many situations the data is neither perfectly nor completely labeled.

LPS aims at using the available labeled samples to guide the process of building classification and clustering machinery and to help boost their accuracy. Basically, LPS is a combination of two learning paradigms, supervised and unsupervised, where the former deals exclusively with labeled data and the latter is concerned with unlabeled data. Hence, two questions arise:

  • Can we improve supervised learning with unlabeled data? 
  • Can we guide unsupervised learning by incorporating a few labeled samples?

Typical LPS applications include medical diagnosis (Bouchachia & Pedrycz, 2006a), facial expression recognition (Cohen et al., 2004), text classification (Nigam et al., 2000), protein classification (Weston et al., 2003), and several natural language processing tasks such as word sense disambiguation (Niu et al., 2005) and text chunking (Ando & Zhang, 2005).

Because LPS is still a young but active research field, it lacks a survey outlining the existing approaches and research trends. In this chapter, we take a step towards such an overview. We discuss (i) the background of LPS, (ii) the main focus of our LPS research along with the underlying assumptions behind LPS, and (iii) future directions and challenges of LPS research.
Chapter Preview

Introduction

Recently, the fields of machine learning, pattern recognition, and data mining have witnessed a new research stream: learning with partial supervision (LPS), also known as semi-supervised learning. This learning scheme is motivated by the fact that acquiring labeling information for data can be quite costly and is sometimes prone to mislabeling. The general spectrum of learning from data is depicted in Figure 1. As shown, in many situations the data is neither perfectly nor completely labeled.

Figure 1.

Learning from data spectrum

LPS aims at using the available labeled samples to guide the process of building classification and clustering machinery and to help boost their accuracy. Basically, LPS is a combination of two learning paradigms, supervised and unsupervised, where the former deals exclusively with labeled data and the latter is concerned with unlabeled data. Hence, two questions arise:

  • Can we improve supervised learning with unlabeled data?

  • Can we guide unsupervised learning by incorporating a few labeled samples?

Typical LPS applications include medical diagnosis (Bouchachia & Pedrycz, 2006a), facial expression recognition (Cohen et al., 2004), text classification (Nigam et al., 2000), protein classification (Weston et al., 2003), and several natural language processing tasks such as word sense disambiguation (Niu et al., 2005) and text chunking (Ando & Zhang, 2005).

Because LPS is still a young but active research field, it lacks a survey outlining the existing approaches and research trends. In this chapter, we take a step towards such an overview. We discuss (i) the background of LPS, (ii) the main focus of our LPS research along with the underlying assumptions behind LPS, and (iii) future directions and challenges of LPS research.


Background

LPS is about devising algorithms that combine labeled and unlabeled data in a symbiotic way in order to boost classification accuracy. The scenario is portrayed in Figure 2, which shows that the combination can be done in two main ways: active/passive pre-labeling, or 'pure' LPS (Figure 4). We draw a clearer picture of these schemes by means of an up-to-date taxonomy of methods.

Figure 2.

Combining labeled and unlabeled data

Figure 4.

Combining labeled and unlabeled data

Active and Passive Pre-Labeling

Pre-labeling aims at assigning a label to unlabeled samples (called queries). These samples are then used together with the originally labeled samples to train a fully supervised classifier (Fig. 3). "Passive" pre-labeling means that the labeling is done automatically; it is referred to as selective sampling or self-training. It has been extensively discussed and consists of first training a classifier and then using it to label the unlabeled data (for more details see Bouchachia, 2007). Various algorithms have been used to perform selective sampling, such as multilayer perceptrons (Verikas et al., 2001), self-organizing maps (Dara et al., 2002), and clustering techniques (Bouchachia, 2005a). In active learning, on the other hand, queries are sequentially submitted to an oracle for labeling. Different models have been applied, such as neural network inversion (Baum, 1991), decision trees (Wiratunga et al., 2003), and query by committee (Freund & Schapire, 1997).

Figure 3.

Pre-labeling approaches
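The passive pre-labeling (self-training) loop described above can be sketched in a few lines. The sketch below is illustrative only: the nearest-centroid base learner and the distance-margin confidence rule are assumptions made for brevity, not the specific algorithms cited in the chapter.

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(model, X):
    """Return predicted classes and the distance matrix to all centroids."""
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)], d

def self_train(X_lab, y_lab, X_unlab, margin=1.0, max_rounds=10):
    """Self-training: repeatedly pre-label confident samples and retrain."""
    for _ in range(max_rounds):
        model = fit_centroids(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        pred, d = predict(model, X_unlab)
        d_sorted = np.sort(d, axis=1)
        # A query is "confident" when its nearest centroid is clearly
        # closer than the second nearest (the margin is a tunable assumption).
        confident = (d_sorted[:, 1] - d_sorted[:, 0]) >= margin
        if not confident.any():
            break
        # Move confidently pre-labeled queries into the labeled pool.
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pred[confident]])
        X_unlab = X_unlab[~confident]
    return fit_centroids(X_lab, y_lab)

# Toy 1-D example: two labeled seeds, forty unlabeled points in two clusters.
rng = np.random.default_rng(0)
X_lab = np.array([[-2.0], [2.0]])
y_lab = np.array([0, 1])
X_unlab = np.vstack([rng.normal(-2, 0.3, (20, 1)),
                     rng.normal(2, 0.3, (20, 1))])
model = self_train(X_lab, y_lab, X_unlab)
pred, _ = predict(model, np.array([[-1.5], [1.5]]))
print(pred)  # → [0 1]
```

Active learning replaces the confidence test with its opposite: the *least* confident queries (smallest margin) would be sent to the oracle for a true label instead of being labeled automatically.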
