Awareness-Based Recommendation by Passively Interactive Learning: Toward a Probabilistic Event


Tomohiro Yamaguchi, Takuma Nishimura, Shota Nagahama, Keiki Takadama
Copyright: © 2019 |Pages: 29
DOI: 10.4018/978-1-5225-5276-5.ch009


In artificial intelligence and robotics, one of the important issues is the design of the human interface. There are two approaches: machine-centered interaction design and human-centered interaction design. This research pursues the latter. This chapter presents an interactive learning system that assists positive change in a human's preference toward his/her true preference, and then discusses the evaluation of the awareness effect. The system behaves passively, reflecting the human's intelligence back by visualizing the traces of his/her behaviors. Experimental results showed that subjects divide into two groups, heavy users and light users, and that the same visualizing condition has different effects on each group. They also showed that the authors' system improves the efficiency of deciding the most preferred plan for both heavy users and light users. As future research directions, a probabilistic event and a basic way of recommending it are discussed.
Chapter Preview


Interactive Reinforcement Learning with Human

A long-term goal of interactive learning systems is to incorporate humans in solving complex tasks. Reinforcement learning is the standard behavior learning method for robots, animals, and humans. In interactive reinforcement learning, there are two roles: a learner and a trainer. The input to a reinforcement learner, which serves as its learning goal, is called a reward, and the output of the learner, its learning result, is called a policy. For example, in training a dog with a human trainer, Peterson (2000, 2001) showed that clicker training is an easy way to shape new behaviors: when the dog performs a new behavior to be learned, the trainer clicks the clicker as a positive reward. Pryor (2006) remarks that clicker training is a method for training an animal that uses positive reinforcement in conjunction with a clicker to mark the behavior being reinforced, under behavior modification principles.

In current research on interactive reinforcement learning, there are two approaches to supporting a learner with feedback: giving a learning goal (reward based) or giving a learning result (policy based). The former approach is clicker training for the robot, in which a human trainer gives a learning goal to the robot learner. In the field of robot learning, Kaplan et al. (2002) showed that an interactive reinforcement learning method, in which the reward function denoting the goal is given interactively, worked to establish communication between a human and the pet robot AIBO. The main feature of this method is the interactive reward function setup, which in previous reinforcement learning methods was a fixed, built-in function. The user can thus refine the reinforcement learner's behavior sequences incrementally.
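The reward-based approach above can be illustrated with a minimal sketch: tabular Q-learning in which the reward is not a fixed built-in function but a callback queried at every step, standing in for the trainer's interactive click. This is an illustrative toy, not the method of any cited paper; the state space, episode length, and parameters are assumptions.

```python
import random

def q_learning_interactive(states, actions, transition, get_reward,
                           episodes=100, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning where the reward comes from an interactive
    source (e.g. a human trainer's click) instead of a fixed function."""
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]                            # each episode starts at the first state
        for _ in range(20):                      # bounded episode length
            if random.random() < epsilon:        # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = transition(s, a)
            r = get_reward(s, a, s2)             # asked interactively each step
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions)
                                  - q[(s, a)])
            s = s2
    return q
```

Because `get_reward` is just a callable, the trainer's feedback can change between episodes — which is exactly the property (a non-fixed reward function) that the rest of this section examines.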

Ng et al. (1999) and Konidaris & Barto (2006) showed that reward shaping provides the theoretical framework for such interactive reinforcement learning methods. Shaping accelerates the learning of complex behavior sequences: it guides learning toward the main goal by adding shaping reward functions that act as subgoals. Previous reward shaping methods make three assumptions about reward functions:

  • The main goal is given or known to the designer;

  • Marthi (2007) remarks that subgoals are assumed to be shaping rewards generated by a potential function toward the main goal;

  • Ng et al. (1999) showed that shaping rewards are policy invariant, meaning they do not affect the optimal policy of the main goal.
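The second and third assumptions can be made concrete. In potential-based shaping (Ng et al., 1999), the shaping reward added to the environment reward has the form F(s, s') = γφ(s') − φ(s) for some potential function φ over states, and this form is what guarantees policy invariance. A minimal sketch, with a hypothetical distance-to-goal potential:

```python
def shaped_reward(r, s, s2, phi, gamma=0.9):
    """Potential-based shaping (Ng et al., 1999): add
    F(s, s') = gamma * phi(s') - phi(s) to the environment reward r.
    Shaping rewards of this form do not change the optimal policy."""
    return r + gamma * phi(s2) - phi(s)

# Hypothetical potential for a chain task with goal state 3:
# higher (less negative) potential closer to the goal.
phi = lambda s: -abs(3 - s)
```

The policy invariance is easy to see in the undiscounted case (γ = 1): summed along any trajectory, F telescopes to φ(s_end) − φ(s_start), so it adds the same constant to every path between two states and cannot change which path is optimal.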

However, these assumptions will not hold in interactive reinforcement learning with a non-expert end-user. The main reason, discussed by Griffith et al. (2013), is that human feedback signals may be inconsistent with the optimal policy. It is not easy to maintain these assumptions while the end-user gives rewards to the reinforcement learning agent: the reward function may not stay fixed for the learner if the end-user changes his/her mind or preference. Yet most previous reinforcement learning methods assume that the reward function is fixed and the optimal solution is unique, so they will be of little use in interactive reinforcement learning with an end-user.

To avoid this problem, the latter approaches have a human trainer provide a sample of the learning result to the robot learner. For robot learning with a human, inverse reinforcement learning, proposed by Ng & Russell (2000), is a method in which, after the human provides demonstrations of an optimal policy, a reward function explaining the demonstrations is generated in order to learn the optimal policy. Another approach, called policy shaping, was proposed by Griffith et al. (2013). Instead of requiring demonstrations, it allows a human trainer to simply critique the learner's behavior ("that was right/wrong"). The human's feedback is thus a label on the optimality of the action taken in each state.
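The core of policy shaping is how accumulated right/wrong critiques are turned into a belief that an action is optimal. A sketch of the combination rule from Griffith et al. (2013): if each label is assumed correct with probability C (the trainer's consistency), the belief depends only on the difference between "right" and "wrong" counts. The function name and default consistency value here are assumptions for illustration.

```python
def action_optimality(n_right, n_wrong, consistency=0.8):
    """Probability that an action is optimal given accumulated
    'right'/'wrong' critiques, assuming each label is correct with
    probability `consistency` (the feedback-combination rule of
    policy shaping, Griffith et al., 2013)."""
    d = n_right - n_wrong          # net number of 'right' labels
    c = consistency
    return c ** d / (c ** d + (1 - c) ** d)
```

Note how inconsistent feedback degrades gracefully: equal numbers of "right" and "wrong" labels give d = 0 and a belief of 0.5, i.e. no information, rather than a contradictory reward signal.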

To introduce our approach, we organize reinforcement learning methods. Table 1 shows their characteristics with respect to interactive reinforcement learning. In reinforcement learning, an optimal solution is determined by the reward function and the optimality criteria. In standard reinforcement learning, the optimal solution is fixed since both the reward function and the optimality criteria are fixed. In interactive reinforcement learning, on the other hand, the optimal solution may change according to the interactive reward function. Furthermore, in interactive reinforcement learning with a human, various optimal solutions can occur since the optimality criteria depend on the human's preference.

Table 1.
Characteristics on interactive reinforcement learning
Type of Reinforcement Learning | An Optimal Solution | Reward Function | Optimality Criteria
interactive                    | may change          | interactive     | fixed
interactive with human         | various optimal     | may change      | human's preference

The objective of this research, then, is to recommend preferable solutions to each user. The main problem is how to guide the estimation of the user's preference. Our solution consists of two ideas: one is to prepare various solutions by every-visit-optimality, proposed by Satoh & Yamaguchi (2006); the other is the coarse-to-fine recommendation strategy proposed by Yamaguchi, Nishimura & Sato (2011). Our approach treats the human as a novice trainer. First, the novice trainer inputs initial learning goals; then the learning system generates and suggests candidate optimal learning results to the novice trainer in order to clarify his/her final learning goals.

Key Terms in this Chapter

Additional Probabilistic Event: It is the probabilistic event that affects other events.

Probabilistic Event: It is an event that occurs probabilistically, such as aurora-watching or cherry-blossom viewing.

Model of a User’s Preference Shift: It is defined by two axes, preference reduction and preference extension. Comparing the previous preference set and the current preference set, the common set is the invariant preference, the reduction set is called preference reduction, and the addition set is called preference extension.

Interactive Recommendation Space: In the recommendation space, the user can view and select various plans actively. The recommendation space consists of two dimensions, the preference reduction axis and the preference extension axis, in which various plans are arranged in a plane.

Interactive Reinforcement Learning with Human: A reinforcement learning method in which the reward function denoting the goal is given interactively by a human. It is not easy to keep the reward function fixed while the human gives rewards to the reinforcement learning agent: the reward function may not stay fixed for the learning algorithm if the end-user changes his/her mind or preference.

Visualizing the User’s Preference Trace: The objective of this visualization is to show the distribution and the degree of the user’s preference to him/herself.

Heavy Users: Users of the interactive recommendation system who decide the most preferred plan after watching almost all plans.

Awareness-Based Recommendation: It is the user-centered recommendation by visualizing both the recommendation space with prepared recommendation plans and the user’s preference trace as the history of the recommendation in it. The recommendation space visualizes the possible preference shift of the user.

Preference Change Problem: It is the problem that the collected user profile is not the same as the user's current preference.

Visualizing the Recommendation Space: The objective of this visualization is to inform a user of two kinds of information. First is that the recommendation space consists of two-axes. Second is that in each axis, groups or plans are ordered according to the recommendation order.

Light Users: Users of the interactive recommendation system who do not watch all plans since they stop watching when a preferred plan is found.

Human Adaptive and Friendly: A less active but more intelligent agent is desirable, since it does not seem officious to the human.
