
Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions

Release Date: October 2011. Copyright © 2012. 444 pages.
DOI: 10.4018/978-1-60960-165-2, ISBN13: 9781609601652, ISBN10: 1609601653, EISBN13: 9781609601676
Cite Book

MLA

Sucar, L. Enrique, Eduardo F. Morales, and Jesse Hoey. "Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions." IGI Global, 2012. 1-444. Web. 31 Jul. 2014. doi:10.4018/978-1-60960-165-2



Description

One of the goals of artificial intelligence (AI) is to create autonomous agents that must make decisions based on uncertain and incomplete information: rational agents that take the best action given the information available and their goals.

Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions provides an introduction to different types of decision theory techniques, including MDPs, POMDPs, Influence Diagrams, and Reinforcement Learning, and illustrates their application in artificial intelligence. This book provides insights into the advantages and challenges of using decision theory models for developing intelligent systems.


Table of Contents and List of Contributors

Foreword
Finn V. Jensen

Chapter 1: Introduction (pages 1-8)
L. Enrique Sucar, Eduardo Morales, Jesse Hoey
This chapter gives a general introduction to decision-theoretic models in artificial intelligence and an overview of the rest of the book. It starts...

Chapter 2: Introduction to Bayesian Networks and Influence Diagrams
Luis Enrique Sucar
In this chapter we will cover the fundamentals of probabilistic graphical models, in particular Bayesian networks and influence diagrams, which are...

Chapter 3: An Introduction to Fully and Partially Observable Markov Decision Processes
Pascal Poupart
The goal of this chapter is to provide an introduction to Markov decision processes as a framework for sequential decision making under uncertainty....

Chapter 4: An Introduction to Reinforcement Learning
Eduardo F. Morales, Julio H. Zaragoza
This chapter provides a concise introduction to Reinforcement Learning (RL) from a machine learning perspective. It provides the required background...

Chapter 5: Inference Strategies for Solving Semi-Markov Decision Processes
Matthew Hoffman, Nando de Freitas
Semi-Markov decision processes are used to formulate many control problems and also play a key role in hierarchical reinforcement learning. In this...

Chapter 6: Multistage Stochastic Programming: A Scenario Tree Based Approach to Planning under Uncertainty
Boris Defourny, Damien Ernst, Louis Wehenkel
In this chapter, we present the multistage stochastic programming framework for sequential decision making under uncertainty. We discuss its...

Chapter 7: Automatically Generated Explanations for Markov Decision Processes
Omar Zia Khan, Pascal Poupart, James P. Black
Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to...

Chapter 8: Dynamic LIMIDs (pages 164-189)
Francisco J. Díez, Marcel A. J. van Gerven
One of the objectives of artificial intelligence is to build decision-support models for systems that evolve over time and include several types of...

Chapter 9: Relational Representations and Traces for Efficient Reinforcement Learning
Eduardo F. Morales, Julio H. Zaragoza
This chapter introduces an approach for reinforcement learning based on a relational representation that: (i) can be applied over large search...

Chapter 10: A Decision-Theoretic Tutor for Analogical Problem Solving
Kasia Muldner, Cristina Conati
We describe a decision-theoretic tutor that helps students learn from Analogical Problem Solving (APS), i.e., from problem-solving activities that...

Chapter 11: Dynamic Decision Networks Applications in Active Learning Simulators
Julieta Noguez, Karla Muñoz, Luis Neri, Víctor Robledo-Rella, Gerardo Aguilar
Active learning simulators (ALSs) allow students to practice and carry out experiments in a safe environment – anytime, anywhere. Well-designed...

Chapter 12: An Intelligent Assistant for Power Plant Operation and Training Based on Decision-Theoretic Planning
Alberto Reyes, Francisco Elizalde
In this chapter we present AsistO, a simulation-based intelligent assistant for power plant operators that provides on-line guidance in the form of...

Chapter 13: POMDP Models for Assistive Technology
Jesse Hoey, Pascal Poupart, Craig Boutilier, Alex Mihailidis
This chapter presents a general decision theoretic model of interactions between users and cognitive assistive technologies for various tasks of...

Chapter 14: A Case Study of Applying Decision Theory in the Real World: POMDPs and Spoken Dialog Systems
Jason D. Williams
Spoken dialog systems present a classic example of planning under uncertainty. Speech recognition errors are ubiquitous and impossible to detect...

Chapter 15: Task Coordination for Service Robots Based on Multiple Markov Decision Processes
Elva Corona, L. Enrique Sucar
Markov Decision Processes (MDPs) provide a principled framework for planning under uncertainty. However, in general they assume a single action per...

Chapter 16: Applications of DEC-MDPs in Multi-Robot Systems
Aurélie Beynier, Abdel-Illah Mouaddib
In this chapter, we introduce issues related to the decentralized control of multi-robot systems. We first describe some application domains...

Reviews and Testimonials

Several excellent textbooks cover Bayesian networks. Focus in these books is belief updating in Bayesian networks and learning of models. However, I have for many years been missing a graduate textbook, which systematically introduces the concepts and techniques of graphical models for sequential decision making. This book serves this purpose. Not only does it introduce the basic theory and concepts, but it also contains sections indicating new research directions as well as examples of real world decision models. [...] For the domain expert wanting to exploit graphical decision models for constructing a specific decision support system, this book is a useful hand book of the theory as well as of ideas, which may help establishing appropriate models. To the young researcher: this book will give you a firm ground for working with graphical decision models. Read the book, and you will realize that the story is not over. There are lots of challenges waiting for you, and the book provides you an excellent starting point for an exciting journey into the science of graphical decision models.

– Professor Finn V. Jensen, Aalborg University, Denmark

Topics Covered

  • Active learning simulators
  • Bayesian networks and influence diagrams
  • Decision theoretic models for health in the home
  • Dynamic decision networks applications
  • Fully and partially observable Markov decision processes
  • Intelligent assistants for power plant operations and training
  • Multistage stochastic programming
  • Reinforcement Learning
  • Strategies for solving semi-Markov decision processes
  • Task coordination for service robots

Preface

Under the rational agent approach, the goal of artificial intelligence is to design rational agents that take the best actions given the available information, their prior knowledge, and their goals. In many cases the information and knowledge are incomplete or unreliable, and the results of their decisions are not certain; that is, they have to make decisions under uncertainty. An intelligent agent should try to make the best decisions based on limited information and limited resources. Decision theory provides a normative framework for decision making under uncertainty. It is based on the concept of rationality: an agent should try to maximize its expected utility or minimize its expected costs. Decision theory was initially developed in economics and operations research, but in recent years it has attracted the attention of artificial intelligence researchers interested in understanding and building intelligent agents.
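The rationality principle described above can be made concrete in a few lines. The following sketch (not taken from the book; the actions, probabilities, and utilities are entirely hypothetical) shows an agent picking the action with the highest expected utility under uncertainty:

```python
# Illustrative sketch: maximum-expected-utility decision making.
# All numbers below are hypothetical, for demonstration only.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# A hypothetical agent deciding whether to carry an umbrella,
# given a 30% chance of rain.
actions = {
    "take_umbrella":  [(0.3, 5), (0.7, 8)],    # dry either way, but encumbered
    "leave_umbrella": [(0.3, -10), (0.7, 10)],  # soaked if it rains, free if not
}

# The rational choice is the action maximizing expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here the expected utilities are 7.1 for taking the umbrella versus 4.0 for leaving it, so a rational agent takes the umbrella even though rain is unlikely: the penalty for getting soaked outweighs the inconvenience.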

Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions provides an introduction to different types of decision theory techniques, and illustrates their application in artificial intelligence. This book provides insights into the advantages and challenges of using decision theory models for developing intelligent systems. It includes a general and comprehensive overview of decision theoretic models in artificial intelligence, with a review of the basic solution techniques, a sample of more advanced approaches, and examples of some recent real world applications.

The book is divided into three sections: Fundamentals, Concepts, and Solutions. Section 1 provides a general introduction to the main decision-theoretic techniques used in artificial intelligence: Influence Diagrams, Markov Decision Processes and Reinforcement Learning. It also reviews the foundations of probability and decision theory, and provides a general overview of probabilistic graphical models. Section 2 presents recent theoretical developments that extend some of the techniques in Section 1 to deal with computational and representational issues that arise in artificial intelligence. Section 3 describes a wide sample of applications of decision-theoretic models in different areas of artificial intelligence, including: intelligent tutors and intelligent assistants, power plant control, medical assistive technologies, spoken dialog systems, service robots, and multi-robot systems.
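To give a flavor of the Markov Decision Processes mentioned above, here is a minimal value-iteration sketch. It is not from the book; the two-state MDP, its transition probabilities, rewards, and discount factor are all hypothetical, chosen only to show the Bellman optimality update:

```python
# Minimal value iteration on a hypothetical two-state MDP.
# P[s][a] = list of (probability, next_state, reward) triples.
P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "go":   [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)],
        "go":   [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality update until (near) convergence.
V = {s: 0.0 for s in P}
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# Extract a greedy policy with respect to the converged value function.
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
```

In this toy model the optimal policy is to choose "go" in both states: the occasional large reward for the transition from state 0 to state 1, propagated back through the discounted values, dominates the small but certain reward for staying in state 1.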

This book is intended for students, researchers, and practitioners of artificial intelligence who are interested in decision-theoretic approaches for building intelligent systems. Besides providing a comprehensive introduction to the theoretical bases and computational techniques, it includes recent developments in representation, inference, and learning. It is also useful for engineers and other professionals in different fields interested in understanding and applying decision-theoretic techniques to solve practical problems in education, health, robotics, industry, and communications, among other application areas. The book includes a wide sample of applications that illustrate the advantages as well as the challenges involved in applying these techniques in different domains.

Author(s)/Editor(s) Biography

Enrique Sucar has a Ph.D. in computing from Imperial College, London; an M.Sc. in electrical engineering from Stanford University; and a B.Sc. in electronics and communications engineering from ITESM, Monterrey, Mexico. He has been a Researcher at the Electrical Research Institute and a Professor at ITESM Cuernavaca, and is currently a Senior Researcher at INAOE, Puebla, Mexico. He has more than 100 publications in journals and conference proceedings, and has directed 16 Ph.D. theses. Dr. Sucar is a Member of the National Research System and the Mexican Science Academy, and a Senior Member of the IEEE. He has served as president of the Mexican AI Society, has been a member of the Advisory Board of IJCAI, and is Associate Editor of the journals Computación y Sistemas and Revista Iberoamericana de Inteligencia Artificial. His main research interests are in graphical models and probabilistic reasoning, and their applications in computer vision, robotics, and biomedicine.
Eduardo Morales, Ph.D., has been a research scientist at the National Institute of Astrophysics, Optics and Electronics (INAOE) in Mexico since 2006, where he conducts research in machine learning and robotics. He has a B.Sc. degree (1974) in Physics Engineering from Universidad Autonoma Metropolitana (Mexico), an M.Sc. degree (1985) in Information Technology: Knowledge-Based Systems from the University of Edinburgh (U.K.), and a Ph.D. degree (1992) in Computer Science from the Turing Institute - University of Strathclyde (U.K.). He has been responsible for 20 research projects sponsored by different funding agencies and private companies, and has more than 100 articles in journals and conference proceedings.
Jesse Hoey is an assistant professor in the David R. Cheriton School of Computer Science at the University of Waterloo. Hoey is also an adjunct scientist at the Toronto Rehabilitation Institute in Toronto, Canada. His research focuses on planning and acting in large scale real-world uncertain domains. He has worked extensively on systems to assist persons with cognitive and physical disabilities. He won the Best Paper award at the International Conference on Vision Systems (ICVS) in 2007 for his paper describing an assistive system for persons with dementia during hand washing. Hoey won a Microsoft/AAAI Distinguished Contribution Award at the 2009 IJCAI Workshop on Intelligent Systems for Assisted Cognition, for his paper on technology to facilitate creative expression in persons with dementia. He also works on devices for ambient assistance in the kitchen, on stroke rehabilitation devices, and on spoken dialogue assistance systems. Hoey was co-Chair of the 2008 Medical Image Understanding and Analysis (MIUA) conference and he is Program Chair for the British Machine Vision Conference (BMVC) in 2011.