Transfer Learning

Lisa Torrey, Jude Shavlik
DOI: 10.4018/978-1-60566-766-9.ch011

Abstract

Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. While most machine learning algorithms are designed to address single tasks, the development of algorithms that facilitate transfer learning is a topic of ongoing interest in the machine-learning community. This chapter provides an introduction to the goals, settings, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. The survey covers transfer in both inductive learning and reinforcement learning, and discusses the issues of negative transfer and task mapping.

Introduction

Human learners appear to have inherent ways to transfer knowledge between tasks. That is, we recognize and apply relevant knowledge from previous learning experiences when we encounter new tasks. The more related a new task is to our previous experience, the more easily we can master it.

Common machine learning algorithms, in contrast, traditionally address isolated tasks. Transfer learning attempts to improve on traditional machine learning by transferring knowledge learned in one or more source tasks and using it to improve learning in a related target task (see Figure 1). Techniques that enable knowledge transfer represent progress towards making machine learning as efficient as human learning.

Figure 1.

Transfer learning is machine learning with an additional source of information apart from the standard training data: knowledge from one or more related tasks.


This chapter provides an introduction to the goals, settings, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems.

Transfer methods tend to be highly dependent on the machine learning algorithms being used to learn the tasks, and can often simply be considered extensions of those algorithms. Some work in transfer learning is in the context of inductive learning, and involves extending well-known classification and inference algorithms such as neural networks, Bayesian networks, and Markov Logic Networks. Another major area is in the context of reinforcement learning, and involves extending algorithms such as Q-learning and policy search. This chapter surveys these areas separately.
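As a concrete illustration of how a transfer method can extend a base algorithm, the following sketch (our own, not a specific method from this chapter; all task names and values are hypothetical) initializes a target task's Q-table from source-task values and then continues standard Q-learning from that warm start:

```python
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One standard Q-learning update on the table Q."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Source-task Q-values, assumed already learned (hypothetical numbers).
source_Q = {("s0", "left"): 0.5, ("s0", "right"): 0.9}

# Transfer: start the target task's Q-table from the source values
# rather than from zeros.
target_Q = defaultdict(float, source_Q)

# Target-task learning then proceeds with ordinary Q-learning updates.
q_learning_step(target_Q, "s0", "right", reward=1.0, next_state="s0",
                actions=["left", "right"])
print(target_Q[("s0", "right")])  # slightly above 0.9 after one update
```

Note that the transfer mechanism here is nothing more than a different initialization; the learning algorithm itself is unchanged, which is why such methods can be viewed as extensions of the base learner.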

The goal of transfer learning is to improve learning in the target task by leveraging knowledge from the source task. There are three common measures by which transfer might improve learning. First is the initial performance achievable in the target task using only the transferred knowledge, before any further learning is done, compared to the initial performance of an ignorant agent. Second is the amount of time it takes to fully learn the target task given the transferred knowledge compared to the amount of time to learn it from scratch. Third is the final performance level achievable in the target task compared to the final level without transfer. Figure 2 illustrates these three measures.
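The three measures can be read directly off a pair of learning curves. The sketch below (hypothetical data, our own illustration) computes an initial-performance gain, the time each learner takes to reach a performance threshold, and the difference in final performance:

```python
def transfer_measures(scratch, transfer, threshold):
    """Compare two learning curves, given as lists of performance
    values per training episode (higher is better)."""
    # First measure: initial performance with transferred knowledge
    # versus an agent learning from scratch.
    jumpstart = transfer[0] - scratch[0]

    def time_to(curve):
        # Second measure: first episode at which performance reaches
        # the threshold, or None if it never does.
        for t, p in enumerate(curve):
            if p >= threshold:
                return t
        return None

    # Third measure: final (asymptotic) performance difference.
    asymptote_gain = transfer[-1] - scratch[-1]
    return jumpstart, (time_to(scratch), time_to(transfer)), asymptote_gain

# Hypothetical learning curves (performance per episode).
scratch = [0.1, 0.2, 0.4, 0.6, 0.7, 0.75]
transfer = [0.4, 0.55, 0.7, 0.78, 0.8, 0.8]
print(transfer_measures(scratch, transfer, threshold=0.7))
```

On these curves the transfer learner starts higher, reaches the threshold in fewer episodes, and ends at a higher asymptote, corresponding to the three improvements illustrated in Figure 2.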

Figure 2.

Three ways in which transfer might improve learning: a higher performance at the very beginning of learning, a steeper slope in the learning curve, or a higher asymptotic performance.


If a transfer method actually decreases performance, then negative transfer has occurred. One of the major challenges in developing transfer methods is to produce positive transfer between appropriately related tasks while avoiding negative transfer between tasks that are less related. A section of this chapter discusses approaches for avoiding negative transfer.
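One simple safeguard, sketched below as our own illustration rather than a method surveyed in this chapter, is to evaluate both the transfer-based model and a from-scratch model on held-out target-task data and keep whichever performs better, so that negative transfer can at worst be detected and discarded:

```python
def choose_model(transfer_model, scratch_model, evaluate):
    """Return whichever model scores better on the target task.

    `evaluate` is any function mapping a model to a performance score
    on held-out target-task data (higher is better).
    """
    if evaluate(transfer_model) >= evaluate(scratch_model):
        return transfer_model
    return scratch_model

# Hypothetical models (represented as names) and evaluation scores.
scores = {"with_transfer": 0.62, "from_scratch": 0.71}
best = choose_model("with_transfer", "from_scratch", scores.get)
print(best)  # prints "from_scratch": transfer hurt here, so it is rejected
```

This guard costs the extra effort of training without transfer, which is exactly the cost transfer was meant to avoid; more sophisticated approaches instead try to predict task relatedness before committing to transfer.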

When an agent applies knowledge from one task in another, it is often necessary to map the characteristics of one task onto those of the other to specify correspondences. In much of the work on transfer learning, a human provides this mapping, but there are some methods for performing the mapping automatically. A section of this chapter discusses automatic task mapping.
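In the simplest case such a mapping is just a human-provided dictionary relating target-task states and actions to their source-task counterparts. The sketch below (hypothetical task names and values, our own illustration) uses such a mapping to look up transferred action values:

```python
# Source-task knowledge, e.g. learned action values (hypothetical).
source_values = {("near_goal", "advance"): 0.9, ("near_goal", "retreat"): 0.1}

# Human-provided mappings from target-task names onto source-task names.
state_map = {"near_flag": "near_goal"}
action_map = {"move_to_flag": "advance", "move_away": "retreat"}

def transferred_value(target_state, target_action):
    """Translate a target (state, action) pair through the mapping and
    read off the corresponding source-task value, defaulting to 0.0
    when no correspondence exists."""
    key = (state_map.get(target_state), action_map.get(target_action))
    return source_values.get(key, 0.0)

print(transferred_value("near_flag", "move_to_flag"))  # prints 0.9
```

Methods for automatic task mapping aim to construct tables like `state_map` and `action_map` without human input, for example by searching over candidate correspondences.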

We will make a distinction between transfer learning and multi-task learning (Caruana, 1997), in which several tasks are learned simultaneously (see Figure 3). Multi-task learning is clearly closely related to transfer, but it does not involve designated source and target tasks; instead the learning agent receives information about several tasks at once. In contrast, by our definition of transfer learning, the agent knows nothing about a target task (or even that there will be a target task) when it learns a source task. It may be possible to approach a multi-task learning problem with a transfer-learning method, but the reverse is not possible. It is useful to make this distinction because a learning agent in a real-world setting is more likely to encounter transfer scenarios than multi-task scenarios.

Key Terms in this Chapter

Reinforcement Learning: a type of machine learning in which an agent learns, through its own experience, to navigate through an environment, choosing actions in order to maximize the sum of rewards.

Target Task: a task in which learning is improved through knowledge transfer from a source task.

Source Task: a task from which knowledge is transferred to a target task.

Inductive Learning: a type of machine learning in which a predictive model is induced from a set of training examples.

Negative Transfer: a decrease in learning performance in a target task due to transfer learning.

Policy: the mechanism by which a reinforcement-learning agent chooses which action to execute next.

Inductive Bias: the set of assumptions that an inductive learner makes about the concept being learned.

Transfer Learning: methods in machine learning that improve learning in a target task by transferring knowledge from one or more related source tasks.

Multi-Task Learning: methods in machine learning for learning multiple tasks simultaneously.

Mapping: a description of the correspondences between properties of source and target tasks in transfer learning or analogical reasoning.
