Automatically Generated Explanations for Markov Decision Processes


Omar Zia Khan, Pascal Poupart, James P. Black
DOI: 10.4018/978-1-60960-165-2.ch007

Abstract

Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDPs by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. These explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate this technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. We also evaluate these automatically generated explanations for course-advising through a user study involving students.

Introduction

In many situations, a sequence of decisions must be taken by an individual or system (e.g., course selection by students, inventory management by a store, etc.). However, deciding on a course of action is notoriously difficult when the effects of the actions are uncertain and the objectives are complex. As described earlier, Markov decision processes (MDPs) provide a principled approach for automated planning under uncertainty. While the beauty of an automated approach is that the computational power of machines can be harnessed to optimize difficult sequential decision-making tasks, it has two drawbacks. First, experts find it difficult to create accurate models or debug inaccurate ones. Second, users find it difficult to understand the reasoning behind an automated recommendation. Both issues are key bottlenecks to the widespread use of MDPs. Thus, there is a need for explanations that enhance the expert's understanding of, and the user's trust in, these recommendations.

It is difficult to design MDPs because real-world MDPs often have hundreds of thousands, if not millions, of states. There are no existing tools for experts to examine or debug their models. The current design process involves successive iterations of tweaking various parameters to achieve a desirable output, and at the end the experts still cannot verify whether the policy indeed reflects their requirements accurately. Users also have to trust the policy by treating it as a black box, with no explanation whatsoever of how the recommendation was computed or of the system's confidence in it. They cannot observe which factors the system considered while making the recommendation. Nor can the system accept suggestions from the user to modify the policy and update the model to reflect such preferences. Our approach of providing a minimal set of explanations helps alleviate some of these problems. But first, we briefly describe the process of designing transition and reward functions, which will help us better appreciate the complexity of creating MDPs and hence motivate the need for explanations.

Challenges in Designing Transition Functions

It is not trivial to encode transition functions for a real-world problem, even using a factored transition model. Experts can attempt to handcraft a transition function, but this is likely to be a tedious process. Further, it is even more likely that their encoding will be erroneous, and they will have to undergo a laborious process of finding and rectifying these errors. Alternatively, the transition model can be learned from historical data by frequency counting, which is equivalent to maximum-likelihood parameter estimation. Several issues exist with such an approach. First, the distribution learned from the historical data may not match the distribution actually encountered by the MDP. Second, there may not be enough data to learn the distributions reliably; insufficient data can also cause rare events to be missing from the learned model. Another approach to learning transition functions is based on Bayesian reinforcement learning: a distribution over all possible transition models is maintained and updated as more evidence is accumulated. Both approaches are sketched below.
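The chapter does not include code, but a minimal sketch may make the two learning approaches concrete: frequency counting (maximum-likelihood estimation) from logged transitions, and a Dirichlet-count posterior in the spirit of Bayesian reinforcement learning. The flat state/action indexing, function names, and uniform fallback for unvisited state-action pairs are illustrative assumptions, not details from the chapter.

```python
import numpy as np


def learn_transitions_ml(trajectories, n_states, n_actions):
    """Maximum-likelihood transition estimation by frequency counting.

    trajectories: iterable of (state, action, next_state) triples from
    historical data. Returns T with T[s, a, s'] = P(s' | s, a).
    """
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in trajectories:
        counts[s, a, s_next] += 1

    T = np.zeros_like(counts)
    for s in range(n_states):
        for a in range(n_actions):
            total = counts[s, a].sum()
            if total > 0:
                # Relative frequencies = maximum-likelihood estimate.
                T[s, a] = counts[s, a] / total
            else:
                # Assumption: fall back to uniform for unvisited (s, a) pairs.
                T[s, a] = 1.0 / n_states
    return T


def update_dirichlet(alpha, s, a, s_next):
    """Bayesian alternative: keep Dirichlet counts for each row P(. | s, a)
    and increment them as new evidence is observed."""
    alpha[s, a, s_next] += 1.0
    return alpha


def posterior_mean(alpha):
    """Expected transition model under the current Dirichlet posterior."""
    return alpha / alpha.sum(axis=2, keepdims=True)
```

In this sketch, the maximum-likelihood estimator illustrates the data-sparsity issue noted above: unvisited state-action pairs carry no information and rare transitions observed zero times get probability zero. The Dirichlet variant starts from a prior (e.g., `alpha` initialized to a small positive constant), so rare events retain nonzero probability and the estimate sharpens as evidence accumulates.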
