Task Coordination for Service Robots Based on Multiple Markov Decision Processes

Elva Corona, L. Enrique Sucar
Copyright: © 2014 | Pages: 18
DOI: 10.4018/978-1-4666-4607-0.ch002

Abstract

Markov Decision Processes (MDPs) provide a principled framework for planning under uncertainty. However, they generally assume a single action per decision epoch. In service robot applications, multiple tasks are required simultaneously, such as navigation, localization, and interaction. We have developed a novel framework based on functional decomposition that divides a complex problem into several sub-problems. Each sub-problem is defined as an MDP and solved independently, and the individual policies are combined to obtain a global policy. In contrast to most previous approaches for hierarchical MDPs, in our approach all the MDPs work in parallel, so we obtain a reactive system based on a decision-theoretic framework. We initially solved each MDP independently and combined their policies assuming no conflicts. We then defined two kinds of conflicts, resource and behavior conflicts, and proposed solutions for both. The first kind of conflict is solved off-line using a two-phase process that guarantees a near-optimal global policy. Behavior conflicts are solved on-line based on a set of restrictions specified by the user and a constraint satisfaction module that selects the action set with the highest expected utility. We have used these methods for task coordination in service robots and present experimental results for a messenger robot.
Chapter Preview

Introduction

Our work is motivated by planning under uncertainty in robotics. Consider a mobile robot that has to perform a complex task in an uncertain environment. To accomplish its goal, the robot has to carry out several subtasks simultaneously, such as finding the shortest route to a certain location while, at the same time, avoiding obstacles and keeping track of its location on the map. It might also need to recognize objects in the environment and interact with people. A popular approach to this problem in robotics is based on Brooks’ subsumption architecture (Brooks, 1986), in which several processes sense and act in parallel. The conflicts that may arise between the different behaviors are usually resolved by a fixed priority structure. However, this type of task coordination has several drawbacks:

  • As the number of subtasks increases, defining the priority structure becomes very difficult, and

  • The priority is fixed and cannot change depending on the current situation.

We consider an alternative approach based on decision-theoretic planning, in which the priority of the subtasks can be decided dynamically such that the best actions can be taken at each decision point.

Markov Decision Processes (MDPs) (Bellman, 1957; Puterman, 1994) have become a standard method for representing uncertainty in decision-theoretic planning. They are simple for domain experts to specify, or can be learned from data. However, if we represent the robot task coordination problem as a single MDP, we have to consider all possible combinations of the possible simultaneous actions. This implies an explosion in the action-state space and thus an important increase in the complexity of solving the MDP; it also becomes much more difficult to specify or learn the model. Given that each subtask is usually implemented as a separate software module, it is natural to view each subtask as a different MDP and then combine their policies in some way to obtain the optimal global policy. Although there has been some work on MDPs with concurrent or parallel actions (Meuleau et al., 1998; Rohanimanesh & Mahadevan, 2008; Younes & Simmons, 2004; Marthi, Russell, Latham & Guestrin, 2005; Little & Thiebaux, 2006; Sucar, 2007; Mausam & Weld, 2008), these approaches generally assume that the subtasks are independent and do not explicitly consider the conflicts between the policies of each subtask.
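
To make the contrast concrete, the following Python sketch (our illustration, not the chapter’s code; the transition and reward tables are invented toy values) solves one small navigation subtask as its own MDP with value iteration, and then shows how the joint action space grows when several subtasks are folded into a single MDP.

```python
from itertools import product

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Solve one subtask MDP. P[(s, a)] is a list of (prob, next_state);
    R[(s, a)] is the immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = {a: R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                 for a in actions}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            # Extract the greedy policy for this subtask alone.
            return V, {s: max(actions, key=lambda a: R[(s, a)] + gamma *
                              sum(p * V[s2] for p, s2 in P[(s, a)]))
                       for s in states}

# Toy navigation subtask: two locations, move or stay.
states = ["hall", "office"]
actions = ["go", "stay"]
P = {("hall", "go"): [(0.8, "office"), (0.2, "hall")],
     ("hall", "stay"): [(1.0, "hall")],
     ("office", "go"): [(0.8, "hall"), (0.2, "office")],
     ("office", "stay"): [(1.0, "office")]}
R = {(s, a): (1.0 if s == "office" else 0.0) for s in states for a in actions}
V, policy = value_iteration(states, actions, P, R)

# A single joint MDP would instead range over the cross-product of every
# subtask's action set, e.g. navigation x localization x dialogue:
joint_actions = list(product(actions, ["relocalize", "track"], ["speak", "listen"]))
print(policy, len(joint_actions))  # 2 actions per subtask vs. 8 joint actions
```

With three subtasks of only two actions each, the joint model already has eight actions per state; each additional subtask multiplies this further, which is the explosion the decomposition avoids.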

We have developed a novel framework for task coordination in service robots based on multiple MDPs that considers and resolves conflicts between the individual policies. Based on functional decomposition, a complex task is partitioned into several subtasks. Each subtask is represented as an MDP and solved independently, obtaining an optimal local policy; these policies are executed in parallel, assuming no conflicts, so the combined policy can execute concurrent actions. All the subtasks have a common goal and can share part of the state space, which is represented in factored form.
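
The loop below is a minimal sketch of this parallel execution (the subtask names, state variables, and policies are hypothetical placeholders, not from the chapter): each subtask reads only its own slice of the shared factored state and contributes the action its local policy prescribes, and with no conflicts all the actions are dispatched together.

```python
# Hypothetical sketch: run several subtask policies in parallel over a
# shared factored state. `policies` maps subtask name -> (relevant state
# variables, policy keyed by the projected state).

def project(state, variables):
    """Restrict the global factored state to one subtask's variables."""
    return tuple(state[v] for v in variables)

def step(state, policies):
    """Collect one action per subtask; with no conflicts, all execute."""
    return {task: policy[project(state, variables)]
            for task, (variables, policy) in policies.items()}

state = {"location": "hall", "localized": True, "person_near": False}
policies = {
    "navigation":   (("location",), {("hall",): "go", ("office",): "stay"}),
    "localization": (("localized",), {(True,): "track", (False,): "relocalize"}),
    "interaction":  (("person_near",), {(True,): "greet", (False,): "idle"}),
}
print(step(state, policies))
# {'navigation': 'go', 'localization': 'track', 'interaction': 'idle'}
```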

However, conflicts may arise between the subtasks. We define and solve two types of conflicts:

  • Resource conflicts, and

  • Behavior conflicts.

Resource conflicts occur when two actions require the same physical resource (e.g., control of the robot’s wheels) and thus cannot be executed concurrently. This type of conflict is solved off-line by a two-phase process. In the first phase, an optimal policy is obtained for each subtask (MDP), and an initial global policy is formed by combining the local policies: if there is a conflict between the actions selected by the MDPs for a certain state, the action with the maximum value is kept and the state is marked as a conflict state. In the second phase, this initial solution is improved using policy iteration, taking the previous policy as the initial policy and updating only the states marked as conflicts. This restriction drastically reduces the time required, and a near-optimal global policy is obtained.
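
A schematic version of the two phases is sketched below. It assumes each subtask’s solution exposes its policy, its Q-values, and the actuator each action needs; the `Q` tables and `resource` maps are our illustrative assumptions, not structures defined in the chapter.

```python
def combine_policies(states, subtasks):
    """Phase 1 (sketch): merge local policies. `subtasks` is a list of
    (policy, Q, resource) triples, where resource[a] names the actuator
    that action a needs."""
    global_policy, conflict_states = {}, set()
    for s in states:
        chosen = [(Q[(s, pi[s])], pi[s], resource[pi[s]])
                  for pi, Q, resource in subtasks]
        if len({r for _, _, r in chosen}) < len(chosen):
            conflict_states.add(s)               # same actuator requested twice
            global_policy[s] = {max(chosen)[1]}  # keep the highest-valued action
        else:
            global_policy[s] = {a for _, a, _ in chosen}  # all run concurrently
    return global_policy, conflict_states

def improve_on_conflicts(conflict_states, policy, states, actions, P, R,
                         gamma=0.95, sweeps=50):
    """Phase 2 (sketch): approximate policy iteration over the global model,
    where only the states marked as conflicts may change their action.
    Here `policy` maps each state to one (possibly joint) action of the
    global model, and P, R describe that global model's dynamics."""
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:                         # evaluate the current policy
            a = policy[s]
            V[s] = R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
        for s in conflict_states:                # improve conflict states only
            policy[s] = max(actions, key=lambda a: R[(s, a)] + gamma *
                            sum(p * V[s2] for p, s2 in P[(s, a)]))
    return policy
```

Restricting the improvement step to the marked states is what keeps the second phase cheap: its per-sweep cost grows with the number of conflict states rather than with the full state space, which is why the result is near-optimal rather than optimal.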
