Motivated Learning for Computational Intelligence

Janusz A. Starzyk
Copyright: © 2012 | Pages: 27
DOI: 10.4018/978-1-60960-818-7.ch202

Abstract

This chapter describes a motivated learning (ML) method that advances the model building and learning techniques required for intelligent systems. Motivated learning addresses critical limitations of reinforcement learning (RL), the more common approach to coordinating a machine's interaction with an unknown environment. RL maximizes the external reward by approximating multidimensional value functions; however, it does not work well in dynamically changing environments. The ML method overcomes these problems by triggering internal motivations and by creating abstract goals and internal reward systems to stimulate learning. The chapter addresses the important question of how to motivate an agent to learn and enhance its own complexity. A mechanism is presented that extends low-level sensory-motor interactions toward advanced perception and motor skills, resulting in the emergence of desired cognitive properties. ML is compared to RL using a rapidly changing environment in which the agent needs to manage its motivations as well as choose and implement goals in order to succeed.

Introduction

While we still do not know the mechanisms needed to build them, the design of intelligent machines is likely to revolutionize the way we live. Researchers around the world are working to solve this highly challenging task. Artificial neural networks (ANN), modeled on networks of biological neurons, are successfully used for classification, function approximation, and control. Yet a classical ANN learns only the single task for which it is trained, and it requires extensive training effort and close supervision during learning. The reinforcement learning (RL) method stimulates the development of learning through interaction with the environment; however, state-based value learning, which is at the core of any implementation of RL, is typically useful only for simple systems with a small number of states operating in slowly changing environments. Learning effort and computational cost increase significantly with environmental complexity, so that optimal decision making in a complex environment is still intractable by means of reinforcement learning.
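To make the state-based value learning concrete, here is a minimal tabular Q-learning sketch in Python; the names and the toy environment are hypothetical, not taken from the chapter. It shows the mechanism under discussion: Q-values learned for one reward location become misleading as soon as the environment changes.

import random

# Toy 5-state chain; the reward location can be moved to model a changing environment.
N_STATES = 5
ACTIONS = (-1, +1)                        # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate

def step(state, action, reward_state):
    # One transition: reward 1.0 only when the agent reaches reward_state.
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == reward_state else 0.0)

def train(q, reward_state, episodes=500):
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(20):               # bounded episode length
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)                 # occasional random move
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # greedy policy
            s2, r = step(s, a, reward_state)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
train(q, reward_state=4)   # the agent learns to move right
train(q, reward_state=0)   # the reward moves: the learned Q-table now misleads the agent

Nothing in the state representation tells the agent that the world has changed; the table of values simply goes stale, which is the intractability argument made above.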

The overall goal of this chapter is to address the key issues facing the development of cognitive agents that interact with a dynamically changing environment. In such an environment, typical reinforcement learning works poorly because the approximated value function keeps changing, so more extensive training does not translate into more successful operation. The main purpose of this chapter is to introduce a learning strategy that recognizes the environment's complexity and captures it in a network of interdependent motivations, goals, and values that the machine learns while interacting with a hostile environment, as sketched below. The method is inspired by human learning, in which the external reward is not the only motivation to succeed, and actions are taken not merely to maximize this reward but to build a deeper understanding of the complex relations between various objects and concepts in the environment.
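As a loose illustration of such a network of interdependent motivations, goals, and values, the following sketch (hypothetical names, not the chapter's implementation) represents each abstract goal as a node that exists only to serve a lower-level need:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Goal:
    # One node in the learned network of motivations, goals, and values.
    name: str
    value: float = 0.0                  # learned value of satisfying this goal
    parent: Optional["Goal"] = None     # the lower-level goal it helps satisfy
    subgoals: list = field(default_factory=list)

def add_subgoal(parent: Goal, name: str) -> Goal:
    # Create an abstract goal whose only purpose is to serve an existing one.
    g = Goal(name=name, parent=parent)
    parent.subgoals.append(g)
    return g

# A primitive, externally rewarded need spawns internally motivated subgoals.
hunger = Goal("reduce hunger", value=1.0)
food = add_subgoal(hunger, "obtain food")
farming = add_subgoal(food, "grow food")

The point of the structure is that the deeper goals (such as growing food) carry value only through their dependence on the externally rewarded need, and the machine discovers these dependencies rather than being given them.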

The method described in this chapter is known as motivated learning (ML): internal motivations, created either by the external reward or by other motivations, may dominate over the externally set goals (and rewards). In reinforcement learning, the machine does not always try to maximize its reward; sometimes it performs random moves. This abandonment of the optimum policy is part of its learning strategy: the random moves are used to explore the environment and perhaps improve its value system, but as learning progresses the machine follows the optimum policy more often, trying to maximize the total reward received. In motivated learning, abandoning the optimum policy that maximizes the external reward is deliberate and is driven by the need to satisfy internally set objectives. In the process, the machine learns new perceptions, improves sensory-motor coordination, and discovers complex relations that exist in the environment. By relating its actions to the changes they cause in the environment, an ML machine builds complex motivations and a system of internal rewards that help it operate in this environment, as in the sketch below.
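The loop below is a minimal sketch of that decision process; the pain signals, their dynamics, and the action list are invented for illustration. The agent attends to its dominant internal need rather than to the external reward, and it generates its own reward when an action reduces that need.

import random

# Internal need ("pain") signals and the actions the agent has linked to them.
pains = {"hunger": 0.8, "no food stored": 0.3}
remedies = {"hunger": "eat", "no food stored": "gather food"}

def choose_goal(pains):
    # Attend to the dominant pain, even if that abandons the external reward.
    return max(pains, key=pains.get)

for t in range(5):
    goal = choose_goal(pains)
    action = remedies[goal]
    relief = random.uniform(0.1, 0.3)                   # effect of the action on the environment
    pains[goal] = max(0.0, pains[goal] - relief)
    internal_reward = relief                            # self-generated reward signal
    pains["hunger"] = min(1.0, pains["hunger"] + 0.05)  # primitive needs grow over time
    print(t, goal, action, round(internal_reward, 2))

In a full system the remedies themselves would be learned, and reducing one pain (gathering food) would typically create new, more abstract pains (keeping a food supply), which is how the motivation network described above grows.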
