InfoScipedia
A Free Service of IGI Global Publishing House
Below is a list of definitions for the selected term, drawn from multiple scholarly research resources.

What Is a Multi-Objective MDP (MOMDP)?

Advanced Robotics and Intelligent Automation in Manufacturing
An MDP in which the reward function describes a vector of n rewards (reward vector), one for each objective, instead of a scalar.
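The definition above can be made concrete with a minimal sketch (illustrative only, not taken from the chapter): a toy MOMDP whose reward function returns a length-n vector, one entry per objective, rather than the single scalar of a standard MDP. The class name `ToyMOMDP` and all sizes are assumptions for the example.

```python
import numpy as np

# Illustrative sketch of an MOMDP: the reward function R(s, a) yields a
# reward *vector* (one component per objective) instead of a scalar.
class ToyMOMDP:
    def __init__(self, n_states=3, n_actions=2, n_objectives=2, seed=0):
        rng = np.random.default_rng(seed)
        self.n_states = n_states
        self.n_actions = n_actions
        # R[s, a] is a reward vector of length n_objectives.
        self.R = rng.uniform(0, 1, size=(n_states, n_actions, n_objectives))
        # T[s, a] is a probability distribution over next states.
        self.T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

    def step(self, state, action, rng):
        next_state = rng.choice(self.n_states, p=self.T[state, action])
        reward_vector = self.R[state, action]  # shape: (n_objectives,)
        return next_state, reward_vector

rng = np.random.default_rng(1)
mdp = ToyMOMDP()
s, r = mdp.step(0, 1, rng)
print(r.shape)  # a length-2 reward vector, one entry per objective
```

With unknown objective weights, a scalar reward cannot be recovered in advance, which is why MORL methods work with such reward vectors (or, as in this chapter, with reward occurrence probability vectors) directly.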
Published in Chapter:
Model-Based Multi-Objective Reinforcement Learning by a Reward Occurrence Probability Vector
Tomohiro Yamaguchi (Nara College, National Institute of Technology (KOSEN), Japan), Shota Nagahama (Nara College, National Institute of Technology (KOSEN), Japan), Yoshihiro Ichikawa (Nara College, National Institute of Technology (KOSEN), Japan), Yoshimichi Honma (Nara College, National Institute of Technology (KOSEN), Japan), and Keiki Takadama (The University of Electro-Communications, Japan)
Copyright: © 2020 |Pages: 27
DOI: 10.4018/978-1-7998-1382-8.ch010
Abstract
This chapter describes solving multi-objective reinforcement learning (MORL) problems in which there are multiple conflicting objectives with unknown weights. Previous model-free MORL methods require a large number of calculations to collect a Pareto optimal set of V/Q-value vectors. In contrast, model-based MORL can reduce this calculation cost compared with model-free MORL. However, the previous model-based MORL method applies only to deterministic environments. To address these issues, this chapter proposes a novel model-based MORL method based on a reward occurrence probability (ROP) vector with unknown weights. Experimental results are reported for stochastic learning environments with up to 10 states, 3 actions, and 3 reward rules. The results show that the proposed method collects all Pareto optimal policies, with a total learning time of about 214 seconds in the largest setting (10 states, 3 actions, 3 rewards). As future research directions, ways to speed up the method and how to use non-optimal policies are discussed.
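The abstract's notion of collecting all Pareto optimal policies rests on Pareto dominance between value vectors. As a minimal sketch (an assumed illustration, not the chapter's ROP-based algorithm), the following filter keeps only the candidate policies whose value vectors are not dominated by any other candidate:

```python
import numpy as np

# Illustrative Pareto-front filter over candidate value vectors.
# A vector u dominates v when u >= v componentwise and u > v somewhere.
def pareto_front(value_vectors):
    vectors = np.asarray(value_vectors, dtype=float)
    keep = []
    for i, v in enumerate(vectors):
        dominated = any(
            np.all(u >= v) and np.any(u > v)
            for j, u in enumerate(vectors) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Example: three candidate policies with 2-objective value vectors.
candidates = [[1.0, 0.2], [0.5, 0.5], [0.4, 0.4]]
print(pareto_front(candidates))  # [0, 1]: the third is dominated by the second
```

Because the objective weights are unknown, no single scalarization can be optimized up front; retaining the whole Pareto front lets any weight vector chosen later pick its optimal policy from the collected set.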
More Results
Formalizing Model-Based Multi-Objective Reinforcement Learning With a Reward Occurrence Probability Vector
An MDP in which the reward function describes a vector of n rewards (reward vector), one for each objective, instead of a scalar.