Exploitation-Oriented Learning XoL: A New Approach to Machine Learning Based on Trial-and-Error Searches

Kazuteru Miyazaki
ISBN13: 9781605668987 | ISBN10: 1605668982 | ISBN13 Softcover: 9781616923518 | EISBN13: 9781605668994
DOI: 10.4018/978-1-60566-898-7.ch015
Cite Chapter

MLA

Miyazaki, Kazuteru. "Exploitation-Oriented Learning XoL: A New Approach to Machine Learning Based on Trial-and-Error Searches." Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies: Intelligent Techniques for Ubiquity and Optimization, edited by Shu-Heng Chen, et al., IGI Global, 2011, pp. 267-293. https://doi.org/10.4018/978-1-60566-898-7.ch015

APA

Miyazaki, K. (2011). Exploitation-Oriented Learning XoL: A New Approach to Machine Learning Based on Trial-and-Error Searches. In S. Chen, Y. Kambayashi, & H. Sato (Eds.), Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies: Intelligent Techniques for Ubiquity and Optimization (pp. 267-293). IGI Global. https://doi.org/10.4018/978-1-60566-898-7.ch015

Chicago

Miyazaki, Kazuteru. "Exploitation-Oriented Learning XoL: A New Approach to Machine Learning Based on Trial-and-Error Searches." In Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies: Intelligent Techniques for Ubiquity and Optimization, edited by Shu-Heng Chen, Yasushi Kambayashi, and Hiroshi Sato, 267-293. Hershey, PA: IGI Global, 2011. https://doi.org/10.4018/978-1-60566-898-7.ch015

Abstract

Exploitation-oriented Learning (XoL) is a new framework of reinforcement learning. XoL aims to learn a rational policy, whose expected reward per action is larger than zero, and does not require a sophisticated design of reward signal values. In this chapter, as examples of learning systems that belong to XoL, we introduce the rationality theorem of Profit Sharing (PS), the rationality theorem of reward sharing in multi-agent PS, and PS-r*. XoL has several features. (1) While traditional reinforcement learning systems require appropriately designed reward and penalty values, XoL requires only an order of importance among them. (2) XoL can learn more quickly, since it traces successful experiences very strongly. (3) XoL may be unsuitable for pursuing an optimal policy; an optimal policy can be acquired by the multi-start method, which resets all memories in order to obtain a better policy. (4) XoL is effective on classes beyond MDPs, since it is a Bellman-free method that does not depend on dynamic programming (DP). We show several numerical examples to confirm these features.
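To make feature (2) concrete, the sketch below illustrates Profit Sharing credit assignment in Python. It is an illustration only, not code from the chapter: the agent interface, the initial weight of 1.0, and the decay base M = num_actions + 1 are assumptions made for this sketch, and the geometric reinforcement function f_i = r / M^i is a commonly used form reported to satisfy the PS rationality theorem.

```python
# Minimal Profit Sharing (PS) sketch in the spirit of XoL.
# All names and the environment interface are illustrative assumptions,
# not the chapter's own code.
import random

def geometric_credit(reward, episode_len, num_actions):
    """Credit for each step, newest first, using the geometric
    reinforcement function f_i = reward / M^i. Choosing M larger than
    the number of selectable actions is the usual way to satisfy the
    PS rationality theorem (it suppresses credit to looping rules)."""
    M = num_actions + 1  # assumption: any M > num_actions would do
    return [reward / (M ** i) for i in range(episode_len)]

class ProfitSharingAgent:
    def __init__(self, num_actions):
        self.num_actions = num_actions
        self.weights = {}   # (state, action) -> accumulated credit
        self.episode = []   # (state, action) pairs since the last reward

    def act(self, state):
        # Select an action in proportion to accumulated weights;
        # with uniform initial weights, early behaviour is a
        # trial-and-error search.
        ws = [self.weights.get((state, a), 1.0)
              for a in range(self.num_actions)]
        action = random.choices(range(self.num_actions), weights=ws)[0]
        self.episode.append((state, action))
        return action

    def reward(self, r):
        # Distribute credit backward along the episode: the most recent
        # rule receives the most, earlier rules geometrically less.
        # This is the "trace successful experiences strongly" behaviour
        # that lets XoL learn quickly without a value function.
        credits = geometric_credit(r, len(self.episode), self.num_actions)
        for (state, action), c in zip(reversed(self.episode), credits):
            key = (state, action)
            self.weights[key] = self.weights.get(key, 1.0) + c
        self.episode.clear()
```

Because credit decays geometrically from the rewarded step backward, every rule on a successful episode is reinforced immediately and strongly. No Bellman backup or dynamic programming is involved, which is why PS-style learners remain applicable on classes beyond MDPs, as the abstract notes.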
