InfoScipedia: A Free Service of IGI Global Publishing House
Below is a list of definitions for the selected term, drawn from multiple scholarly research resources.

What is a Model-Based Approach?

Handbook of Research on New Investigations in Artificial Life, AI, and Machine Learning
A reinforcement learning algorithm that begins by statistically estimating the MDP model directly, then uses the estimated MDP to calculate the value V(s) of each state or the quality Q(s, a) of each state-action pair, searching for the optimal solution that maximizes V(s) for each state.
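A minimal sketch of the two-step recipe in this definition, assuming a small tabular MDP; the function names, the uniform fallback for unvisited state-action pairs, and the discount factor are illustrative assumptions, not from the source.

import numpy as np

def estimate_mdp(transitions, n_states, n_actions):
    """Step 1: statistically estimate the MDP, i.e., transition
    probabilities P[s, a, s'] and expected rewards R[s, a], from
    observed (s, a, r, s') tuples."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sum = np.zeros((n_states, n_actions))
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
    visits = counts.sum(axis=2, keepdims=True)
    # Unvisited (s, a) pairs fall back to a uniform guess (an assumption).
    P = np.divide(counts, visits,
                  out=np.full_like(counts, 1.0 / n_states),
                  where=visits > 0)
    R = np.divide(reward_sum, visits[:, :, 0],
                  out=np.zeros_like(reward_sum),
                  where=visits[:, :, 0] > 0)
    return P, R

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Step 2: compute V(s) and Q(s, a) on the estimated MDP and
    search for the solution that maximizes V(s) in every state."""
    n_states = P.shape[0]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)  # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V_new = Q.max(axis=1)    # Bellman optimality backup: V(s) = max_a Q(s, a)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q      # greedy policy: pi(s) = argmax_a Q(s, a)
        V = V_new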
Published in Chapter:
Formalizing Model-Based Multi-Objective Reinforcement Learning With a Reward Occurrence Probability Vector
Tomohiro Yamaguchi (Nara College, National Institute of Technology (KOSEN), Japan), Yuto Kawabuchi (Nara College, National Institute of Technology (KOSEN), Japan), Shota Takahashi (Nara College, National Institute of Technology (KOSEN), Japan), Yoshihiro Ichikawa (Nara College, National Institute of Technology (KOSEN), Japan), and Keiki Takadama (The University of Electro-Communications, Tokyo, Japan)
DOI: 10.4018/978-1-7998-8686-0.ch012
Abstract
The mission of this chapter is to formalize multi-objective reinforcement learning (MORL) problems in which there are multiple conflicting objectives with unknown weights. The objective is to collect all Pareto optimal policies so that they can be adapted to a learner's situation. However, previous methods incur huge learning costs, so this chapter proposes a novel model-based MORL method based on the reward occurrence probability (ROP) with unknown weights. It has three main features. First, the average reward of a policy is defined by the inner product of its ROP vector and a weight vector. Second, the method learns the ROP vector of each policy instead of Q-values. Third, Pareto optimal deterministic policies directly form the vertices of a convex hull in the ROP vector space. Therefore, Pareto optimal policies can be calculated independently of the weights, and only once, by the Quickhull algorithm. This chapter reports the authors' current work under a stochastic learning environment with up to 12 states, three actions, and three or four reward rules.
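A minimal sketch (not the authors' code) of the convex-hull step described in the abstract, assuming the ROP vector of each deterministic policy has already been learned; all numbers are made up for illustration. scipy.spatial.ConvexHull wraps Qhull, an implementation of the Quickhull algorithm, so the hull is computed just once, independently of the weights.

import numpy as np
from scipy.spatial import ConvexHull  # Qhull implements Quickhull

# One row per deterministic policy, one column per reward rule:
# rop[i, j] = learned reward occurrence probability of rule j under policy i.
rop = np.array([
    [0.70, 0.10, 0.05],
    [0.20, 0.60, 0.10],
    [0.10, 0.15, 0.55],
    [0.35, 0.3125, 0.225],  # centroid of the other four: interior, never optimal
    [0.40, 0.40, 0.20],
])

hull = ConvexHull(rop)         # Quickhull, run just one time
vertices = set(hull.vertices)  # candidate policies, independent of the weights
# (The chapter's Pareto optimal set corresponds to the maximal face of this
# hull for nonnegative weights; the full vertex set is a safe superset here.)

# The average reward of policy i under weight vector w is the inner product
# w . rop[i], so for any concrete w the optimal policy is always a hull vertex.
w = np.array([0.5, 0.3, 0.2])
avg_reward = rop @ w
best = max(vertices, key=lambda i: avg_reward[i])
print(f"optimal policy for w={w}: policy {best}")  # policy 3 is never selected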
More Results
Model-Based Multi-Objective Reinforcement Learning by a Reward Occurrence Probability Vector
A reinforcement learning algorithm that begins by statistically estimating the MDP model directly, then uses the estimated MDP to calculate the value V(s) of each state or the quality Q(s, a) of each state-action pair, searching for the optimal solution that maximizes V(s) for each state.
Design of Wearable Computing Systems for Future Industrial Environments
An approach based on the use of software models to develop or specify an application or platform.