Below is a list of definitions for this term, drawn from multiple scholarly research resources.

What is Average Reward?

Handbook of Research on New Investigations in Artificial Life, AI, and Machine Learning
The expected reward received per step when an agent routinely performs state transitions according to a policy. In this research, the average reward of a policy is defined as the inner product of the policy's reward occurrence probability (ROP) vector and a weight vector.
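As a minimal illustrative sketch (not the authors' implementation), the average reward under this formulation reduces to a dot product. The ROP probabilities and objective weights below are hypothetical values for a single policy with three reward rules.

```python
import numpy as np

# Hypothetical reward occurrence probability (ROP) vector of one policy:
# each entry is the per-step probability that the corresponding reward rule fires.
rop_vector = np.array([0.10, 0.25, 0.05])

# Hypothetical weight vector expressing the relative importance of each reward rule.
weight_vector = np.array([1.0, 0.5, 2.0])

# Average reward of the policy = inner product of its ROP vector and the weight vector.
average_reward = np.dot(rop_vector, weight_vector)
print(average_reward)  # 0.325
```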
Published in Chapter:
Formalizing Model-Based Multi-Objective Reinforcement Learning With a Reward Occurrence Probability Vector
Tomohiro Yamaguchi (Nara College, National Institute of Technology (KOSEN), Japan), Yuto Kawabuchi (Nara College, National Institute of Technology (KOSEN), Japan), Shota Takahashi (Nara College, National Institute of Technology (KOSEN), Japan), Yoshihiro Ichikawa (Nara College, National Institute of Technology (KOSEN), Japan), and Keiki Takadama (The University of Electro-Communications, Tokyo, Japan)
DOI: 10.4018/978-1-7998-8686-0.ch012
Abstract
The mission of this chapter is to formalize multi-objective reinforcement learning (MORL) problems in which there are multiple conflicting objectives with unknown weights. The goal is to collect all Pareto optimal policies so that they can be adapted to a learner's situation. Because previous methods incur huge learning costs, this chapter proposes a novel model-based MORL method based on the reward occurrence probability (ROP) with unknown weights. There are three main features. The first is that the average reward of a policy is defined as the inner product of its ROP vector and a weight vector. The second is that the method learns the ROP vector of each policy instead of Q-values. The third is that Pareto optimal deterministic policies directly form the vertices of a convex hull in the ROP vector space; therefore, the Pareto optimal policies are computed independently of the weights, and only once, by the Quickhull algorithm. This chapter reports the authors' current work under a stochastic learning environment with up to 12 states, three actions, and three or four reward rules.
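The convex-hull step described in the abstract can be sketched as follows. This is only an illustration under assumed data: the ROP vectors below are hypothetical, and SciPy's ConvexHull (which wraps the Qhull implementation of the Quickhull algorithm) stands in for the chapter's own procedure. The hull is computed once, without reference to any weight vector, and in this toy example the dominated policies fall strictly inside it.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical ROP vectors: one row per deterministic policy,
# one column per reward rule (two reward rules here for simplicity).
rop_vectors = np.array([
    [0.05, 0.30],   # policy 0: favours reward rule 2
    [0.30, 0.05],   # policy 1: favours reward rule 1
    [0.20, 0.22],   # policy 2: balanced, still a hull vertex
    [0.18, 0.18],   # policy 3: dominated by policy 2, strictly inside the hull
    [0.15, 0.21],   # policy 4: dominated by policy 2, strictly inside the hull
])

# ConvexHull is backed by Qhull (Quickhull), so the vertex set is found once,
# independently of any particular weight vector.
hull = ConvexHull(rop_vectors)

# Policies whose ROP vectors are hull vertices are the candidates for Pareto optimality.
print(sorted(hull.vertices.tolist()))  # [0, 1, 2]
```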
More Results
Model-Based Multi-Objective Reinforcement Learning by a Reward Occurrence Probability Vector
The expected reward received per step when an agent routinely performs state transitions according to a policy.
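This "reward per step" reading can also be made concrete. Below is a small sketch under assumptions: a hypothetical three-state Markov chain induced by some fixed policy, where the long-run average reward per step is the expected reward under the chain's stationary distribution.

```python
import numpy as np

# Hypothetical Markov chain induced by a fixed policy:
# P[s, s'] is the probability of moving from state s to state s',
# r[s] is the expected reward received on a step taken from state s.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.8, 0.2],
    [0.5, 0.0, 0.5],
])
r = np.array([0.0, 1.0, 2.0])

# The stationary distribution pi satisfies pi = pi P and sums to 1;
# here it is read off the eigenvector of P^T for the eigenvalue closest to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Average reward per step = expected reward under the stationary distribution.
print(pi @ r)  # 9/17, approximately 0.529
```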