Inference Strategies for Solving Semi-Markov Decision Processes

Matthew Hoffman, Nando de Freitas
DOI: 10.4018/978-1-60960-165-2.ch005

Abstract

Semi-Markov decision processes are used to formulate many control problems and also play a key role in hierarchical reinforcement learning. In this chapter we show how to translate the decision making problem into a form that can instead be solved by inference and learning techniques. In particular, we will establish a formal connection between planning in semi-Markov decision processes and inference in probabilistic graphical models, then build on this connection to develop an expectation maximization (EM) algorithm for policy optimization in these models.
Chapter Preview

Introduction

Researchers in machine learning have long attempted to join the fields of inference and learning with that of decision making. Influence diagrams, for example, explicitly cast the decision-making process as inference in a graphical model (Cooper, 1988; Shachter, 1988). However, while these methods are a straightforward application of inference techniques, they apply only to finite-horizon problems and learn only non-stationary policies.

For goal-directed decision problems, more general techniques such as that of Attias (2003) exist for finding the maximum a posteriori action sequence. (This technique was later extended by Verma & Rao (2006) to compute the most probable explanation.) It is crucial to note, however, that these approaches are not optimal in an expected reward sense. Instead, they can be interpreted as maximizing the probability of reaching the goal.
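
To make the distinction concrete, the following display contrasts the two objectives for a horizon-T problem with start state s_1 and goal state g (the notation here is our own sketch of the goal-directed formulation, not the chapter's own development):

% Goal-directed inference (in the spirit of Attias, 2003): find the action
% sequence with the highest posterior probability given that the goal is reached.
\[
  a^{\mathrm{MAP}}_{1:T} \;=\; \arg\max_{a_{1:T}} \; p(a_{1:T} \mid s_1, s_T = g)
\]
% Expected-reward optimization: find the action sequence (or policy) that
% maximizes the expected cumulative reward, which in general is a different
% objective from the one above.
\[
  a^{\ast}_{1:T} \;=\; \arg\max_{a_{1:T}} \; \mathbb{E}\Bigl[\textstyle\sum_{t=1}^{T} r(s_t, a_t) \,\Bigm|\, s_1, a_{1:T}\Bigr]
\]

Under a flat prior over action sequences, the first objective reduces to maximizing the probability of reaching the goal, which is why it need not be optimal in the expected-reward sense.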

While it is well known in the optimal control literature that there exists a fundamental duality between inference and control for the special case of linear-quadratic Gaussian models (Kalman, 1960), this result does not hold in general. Attempts to extend these ideas to more general models have relied on locally approximating the optimal solution (Toussaint, 2009; Todorov & Li, 2005).

A key step toward general inference-based approaches that remain optimal with respect to expected rewards was taken by Dayan & Hinton (1997) for immediate-reward decision problems. In particular, this work proposes an expectation maximization (EM) approach that works by optimizing a lower bound on the expected reward. The technique was later formalized by Toussaint & Storkey (2006), who extended it to the infinite-horizon case (see also Toussaint et al., 2006). This line of research has since enjoyed substantial success in robotics (Peters & Schaal, 2007; Kober & Peters, 2008; Vijayakumar et al., 2009), where empirical evidence indicates that these methods can often outperform traditional stochastic planning and control methods as well as more recent policy gradient schemes.
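
As a rough sketch of the underlying idea (our notation, assuming a non-negative, bounded immediate reward so that it can play the role of an unnormalized likelihood), the lower bound that EM alternately tightens and maximizes is:

% Expected immediate reward under a policy with parameters theta; x is the
% state, u the action, and p_theta(x, u) the induced state-action distribution.
\[
  J(\theta) \;=\; \sum_{x, u} p_\theta(x, u)\, r(x, u)
\]
% For any distribution q(x, u), Jensen's inequality gives a lower bound on
% log J(theta). The E-step sets q(x, u) proportional to p_theta(x, u) r(x, u),
% making the bound tight; the M-step maximizes the bound with respect to theta.
\[
  \log J(\theta) \;\ge\; \sum_{x, u} q(x, u) \,\log \frac{p_\theta(x, u)\, r(x, u)}{q(x, u)}
\]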

The focus of this chapter is twofold: to serve as an introduction to the “planning as inference” methodology and to show how to extend these techniques to semi-Markov decision processes (SMDPs). SMDPs are an extension of the MDP formalism that generalizes the notion of time, in particular by allowing the time intervals between state transitions to vary stochastically. This allows us to trade off actions not only by their expected rewards but also by the amount of time each action takes to perform.
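
Concretely, an SMDP augments the MDP transition model with a random sojourn time. A minimal sketch of the resulting discounted Bellman optimality equation, in our own notation, is:

% s is the current state, a the action, s' the successor state, and tau the
% random sojourn time, drawn jointly from the model p(s', tau | s, a);
% gamma in (0, 1) is the per-unit-time discount factor.
\[
  V^{\ast}(s) \;=\; \max_{a} \;
    \mathbb{E}_{(s', \tau) \sim p(s', \tau \mid s, a)}
    \Bigl[ R(s, a, \tau) \;+\; \gamma^{\tau}\, V^{\ast}(s') \Bigr]
\]

When every transition takes exactly one time step (tau = 1 deterministically), this reduces to the familiar MDP Bellman optimality equation.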

SMDPs are interesting problems in their own right, with applications to call admission control and queueing systems (see, e.g., Singh et al., 2007; Das et al., 1999). The formalism also serves as a natural platform in robotics for building complex motions from sequences of smaller motion “templates”, as evidenced by Neumann et al. (2009). Finally, SMDPs are a crucial building block for hierarchical reinforcement learning methods (Ghavamzadeh & Mahadevan, 2007; Sutton et al., 1998; Dietterich, 2000). While this chapter serves as an introductory text to the paradigm of inference and learning, and its application to SMDPs, we hope that future work in this area will leverage advances in structured inference techniques for hierarchical tasks of this nature.

The first section of this work describes the basic mixture-of-MDPs model that we build on, while the second section shows how to extend this model to the SMDP formalism. We then describe an EM algorithm for solving these problems. Finally, in the last section we apply this approach to a small SMDP example.
