Reinforcement Learning with Reward Shaping and Mixed Resolution Function Approximation

Marek Grzes, Daniel Kudenko
Copyright: © 2009 | Pages: 19
DOI: 10.4018/jats.2009040103

Abstract

A crucial trade-off is involved in the design process when function approximation is used in reinforcement learning. Ideally, the chosen representation should be expressive enough to approximate the value function as closely as possible. However, the more expressive the representation, the more training data is needed, because the space of candidate hypotheses is larger. A less expressive representation has a smaller hypothesis space, and a good candidate can be found faster. The core idea of this paper is mixed resolution function approximation: a less expressive function approximation provides useful guidance during learning, while a more expressive function approximation yields a final result of high quality. A major question is how to combine the two representations. Two approaches are proposed and evaluated empirically.
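To make the idea concrete, the following is a minimal sketch of one way a coarse (less expressive) value function can guide learning on a fine (more expressive) representation via potential-based reward shaping, as the title suggests. The environment (a 1-D chain), the use of simple state aggregation as the coarse representation, a tabular fine representation, and all parameter values are illustrative assumptions, not the authors' experimental setup.

```python
# Sketch: mixed-resolution learning with potential-based reward shaping.
# Assumptions (not taken from the paper): a 1-D chain task, state aggregation
# as the coarse representation, and a tabular fine representation.
import random

N_STATES = 50          # fine-resolution states of a 1-D chain; goal is the last state
COARSE_BIN = 10        # width of one coarse aggregate (5 coarse "states")
ACTIONS = (-1, +1)     # move left / right
GAMMA, ALPHA, EPS = 0.99, 0.1, 0.1

# Fine-resolution Q-values: one entry per state-action pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
# Coarse-resolution state values, learned quickly and used as a shaping potential.
V_coarse = [0.0] * (N_STATES // COARSE_BIN)

def coarse(s):
    return s // COARSE_BIN

def potential(s):
    return V_coarse[coarse(s)]

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)

        # Update the coarse value function with TD(0) on aggregated states.
        target_c = r + (0.0 if done else GAMMA * V_coarse[coarse(s2)])
        V_coarse[coarse(s)] += ALPHA * (target_c - V_coarse[coarse(s)])

        # Potential-based shaping reward derived from the coarse approximation:
        # F(s, s') = gamma * Phi(s') - Phi(s).
        F = (0.0 if done else GAMMA * potential(s2)) - potential(s)

        # Q-learning update on the fine representation with the shaped reward.
        target_f = r + F + (0.0 if done else GAMMA * Q[(s2, greedy(s2))])
        Q[(s, a)] += ALPHA * (target_f - Q[(s, a)])

        s = s2
```

In this sketch the coarse representation generalizes quickly across aggregated states and feeds its estimates back as a shaping signal, while the final policy is read off the fine-resolution Q-values; the paper's two proposed combination schemes are evaluated empirically in the full text.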
