Motivated Learning for Computational Intelligence

Janusz A. Starzyk
DOI: 10.4018/978-1-60960-551-3.ch011

Abstract

This chapter describes a motivated learning (ML) method that advances the model building and learning techniques required for intelligent systems. Motivated learning addresses critical limitations of reinforcement learning (RL), the more common approach to coordinating a machine's interaction with an unknown environment. RL maximizes an external reward by approximating a multidimensional value function; however, it does not work well in dynamically changing environments. The ML method overcomes these problems by triggering internal motivations and by creating abstract goals and an internal reward system to stimulate learning. The chapter addresses the important question of how to motivate an agent to learn and enhance its own complexity. A mechanism is presented that extends low-level sensory-motor interactions toward advanced perception and motor skills, resulting in the emergence of desired cognitive properties. ML is compared to RL in a rapidly changing environment in which the agent must manage its motivations as well as choose and implement goals in order to succeed.
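To make the contrast drawn in the abstract concrete, the sketch below shows, in minimal Python, the difference between a purely externally rewarded temporal-difference update and an agent whose learning signal also includes internally generated rewards tied to self-created "pain" signals and abstract goals. This is a hypothetical illustration only, not the chapter's algorithm; all names (MotivatedAgent, internal_reward, the pains dictionary) are assumptions introduced for exposition.

```python
import random
from collections import defaultdict


class MotivatedAgent:
    """Hypothetical sketch: TD learner whose reward mixes external
    and internally generated (motivation-driven) components."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.q = defaultdict(float)       # Q-values keyed by (state, action)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.pains = defaultdict(float)   # internal "pain" per abstract need

    def act(self, state):
        # Epsilon-greedy choice over the learned values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def internal_reward(self, need, delta):
        # A growing pain signal defines a new abstract goal (reduce it);
        # reducing that pain yields internal reward.
        self.pains[need] = max(0.0, self.pains[need] + delta)
        return -delta if delta < 0 else 0.0

    def update(self, state, action, ext_reward, int_reward, next_state):
        # Standard TD update, except the learning signal is the sum of
        # external and internal rewards rather than external reward alone.
        reward = ext_reward + int_reward
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td


# Example step: consuming a resource increases an internal pain signal,
# creating pressure to learn a behavior that replenishes it.
agent = MotivatedAgent(actions=["eat", "restock"])
r_int = agent.internal_reward("food_supply", delta=+0.5)   # pain grows, no internal reward yet
agent.update("hungry", "eat", ext_reward=1.0, int_reward=r_int, next_state="low_food")
```

A pure RL baseline corresponds to setting int_reward to zero in every update; the motivated variant differs only in where part of its reward originates, which is the distinction the abstract emphasizes.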
