Interactive Reinforcement Learning with Humans
In the field of robot learning (Kaplan 2002), interactive reinforcement learning, in which the reward function denoting the goal is given interactively, has been used to establish communication between a human and the pet robot AIBO. The main feature of this method is the interactive setup of the reward function, which in previous reinforcement learning methods was a fixed, built-in function. The user can therefore refine the reinforcement learner's behavior sequences incrementally.
Shaping (Konidaris 2006; Ng 1999) is the theoretical framework underlying such interactive reinforcement learning methods. Shaping accelerates the learning of complex behavior sequences: it guides learning toward the main goal by adding shaping reward functions that act as subgoals. Previous shaping methods (Marthi 2007; Ng 1999) make three assumptions about reward functions:
- The main goal is given or known to the designer.
- Subgoals are shaping rewards generated by a potential function defined with respect to the main goal (Marthi 2007).
- Shaping rewards are policy invariant; that is, they do not affect the optimal policy for the main goal (Ng 1999).
However, these assumptions do not hold for interactive reinforcement learning with an end-user, mainly because it is difficult to maintain them while the end-user is giving rewards to the reinforcement learning agent. In particular, the reward function may not be fixed from the learner's viewpoint if the end-user changes his/her mind or preference. Yet most previous reinforcement learning methods assume that the reward function is fixed and the optimal solution is unique, so they are of little use for interactive reinforcement learning with an end-user.
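To make the non-fixed reward concrete, the following minimal sketch (our own assumed setup, not the paper's implementation) runs tabular Q-learning on a tiny five-state chain where the reward comes from an external "user" callback. When the user's preference changes between episodes, the learner's reward function, and hence its optimal policy, changes with it.

```python
import random

ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # illustrative learning parameters

def q_update(Q, s, a, r, s2):
    """Standard one-step Q-learning update on a dict-backed table."""
    best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

def run_episode(Q, user_reward, steps=20):
    """One episode on states 0..4; the reward is supplied interactively."""
    s = 0
    for _ in range(steps):
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q.get((s, b), 0.0))
        s2 = max(0, min(4, s + (1 if a == "right" else -1)))
        q_update(Q, s, a, user_reward(s2), s2)  # reward given by the user
        s = s2

Q = {}
run_episode(Q, lambda s: 1.0 if s == 4 else 0.0)  # user first prefers state 4
run_episode(Q, lambda s: 1.0 if s == 0 else 0.0)  # ...then changes preference
```

Nothing in the update rule accounts for the change of `user_reward`, which is exactly the problem the text identifies: a learner built for a fixed reward function simply keeps averaging conflicting signals into one Q-table.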
Table 1 summarizes the characteristics of interactive reinforcement learning. In reinforcement learning, an optimal solution is determined by the reward function and the optimality criteria. In standard reinforcement learning, the optimal solution is fixed since both the reward function and the optimality criteria are fixed. In interactive reinforcement learning, by contrast, the optimal solution may change according to the interactive reward function. Furthermore, in interactive reinforcement learning with a human, various optimal solutions arise since the optimality criteria depend on the human's preference.
Table 1. Characteristics of interactive reinforcement learning

| Type of reinforcement learning | Optimal solution | Reward function | Optimality criteria |
|---|---|---|---|
| standard | fixed | fixed | fixed |
| interactive | may change | interactive | fixed |
| interactive with human | various | may change | human's preference |
The objective of this research is therefore to recommend solutions preferable to each user. The main problem is how to estimate the user's preference in order to guide the recommendation. Our solution consists of two ideas: one is to prepare various solutions by every-visit-optimality (Satoh 2006); the other is a coarse-to-fine recommendation strategy (Yamaguchi 2008).