Reinforcement Learning for Business Modeling

Fernando S. Oliveira
Copyright: © 2014 | Pages: 10
DOI: 10.4018/978-1-4666-5202-6.ch181

Chapter Preview

The Reinforcement Learning Problem

This section presents the terminology used in models of learning. The first concept presented is the Markov property (Howard, 1971). Let $P(s_{t+1} = s' \mid s_t, a_t)$ represent the transition probability from state $s_t$ to a state $s'$ under action $a_t$, where $s_t$ is the state of a dynamic system at time $t$. If the state has the Markov property, it contains all the information required to determine the transition probabilities: $P(s_{t+1} \mid s_t, a_t) = P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0, a_0)$. A Markovian decision process is characterized by a tuple (policy, reward function, value function, model of the environment).
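
To make the notation concrete, the following is a minimal Python sketch (not from the chapter) of a Markovian transition model: the states, actions, and probabilities are invented for illustration, and the next state is sampled using only the current state and action, as the Markov property requires.

```python
import random

# Hypothetical two-state, two-action transition model (illustrative numbers only).
# P[s][a] maps each successor state s' to the probability P(s' | s, a).
P = {
    "low":  {"wait":   {"low": 0.8, "high": 0.2},
             "invest": {"low": 0.3, "high": 0.7}},
    "high": {"wait":   {"low": 0.4, "high": 0.6},
             "invest": {"low": 0.1, "high": 0.9}},
}

def next_state(state, action):
    """Sample s_{t+1} from P(. | s_t, a_t): under the Markov property, the
    current state and action are all that is needed to look this up."""
    successors = list(P[state][action])
    weights = list(P[state][action].values())
    return random.choices(successors, weights=weights)[0]

print(next_state("low", "invest"))  # "high" with probability 0.7 in this toy model
```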

A policy $\pi$ is a rule of behavior that maps states into actions. A reward function $r_t(i, a)$ defines the utility that an agent receives from choosing an action $a$ in a certain state $i$ at a given time $t$. The ultimate goal of an agent is to maximize the total reward received in the long run, its return $R_t$. If the horizon is finite with $T$ stages, then $R_t = r_{t+1} + r_{t+2} + \cdots + r_T$. If the time horizon is infinite, the return is the discounted sum (where $\gamma \in [0,1)$ is the discount parameter) of each one of the rewards: $R_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1}$.
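
As a small illustration (again, not from the chapter), both definitions of the return can be computed directly from a list of observed rewards; the function names, reward values, and discount parameter below are only examples.

```python
def finite_horizon_return(rewards):
    """R_t for a finite horizon: the plain sum of the rewards r_{t+1}, ..., r_T."""
    return sum(rewards)

def discounted_return(rewards, gamma=0.9):
    """R_t = sum_k gamma^k * r_{t+k+1} for a (truncated) infinite-horizon
    problem with discount parameter gamma in [0, 1)."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Example: three rewards received after time t.
print(finite_horizon_return([1.0, 0.5, 2.0]))    # 3.5
print(discounted_return([1.0, 0.5, 2.0], 0.9))   # 1.0 + 0.45 + 1.62 = 3.07
```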

Key Terms in this Chapter

Sarsa Algorithm: it is a reinforcement-learning algorithm for the on-line control problem that uses on-policy learning. Sarsa computes the expected value of a policy taking into account the possibility of explorative behavior (an explorative action may be non-optimal at a certain stage).
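
A minimal sketch of a single Sarsa update, assuming an action-value table Q indexed by (state, action) pairs; the state labels, reward, step size alpha, and discount gamma are illustrative, not values from the chapter.

```python
from collections import defaultdict

Q = defaultdict(float)  # action-value table indexed by (state, action)

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One on-policy Sarsa update: the target uses a_next, the action the
    behavior policy actually chose in s_next (possibly an exploratory one)."""
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

# Example transition (s, a, r, s', a') with invented labels and reward.
sarsa_update(Q, "low", "invest", 1.0, "high", "wait")
```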

Dynamic Programming: it is the mathematical theory of multi-stage decision processes, aiming to compute the optimal policy, given that the environment follows a Markovian process.
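
One standard dynamic-programming method is value iteration; the sketch below assumes a transition model P[s][a] (a dictionary of successor probabilities, as in the earlier example) and a reward function R(s, a, s'), both hypothetical.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Repeatedly apply the Bellman optimality backup until the value
    function stops changing. P[s][a] is a dict {s': probability};
    R(s, a, s_next) returns the reward for that transition."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R(s, a, s2) + gamma * V[s2])
                    for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```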

Credit-Assignment: it is the process of identifying, among the set of actions chosen in an episode, the ones responsible for the final outcome. Moreover, it is an attempt to identify the best, and worst, decisions chosen during an episode, so that the best decisions are reinforced and the worst penalized.

Q-learning: in this reinforcement learning algorithm the agent evaluates each action (in the possible states of the world) taking into account its possible repercussions on the future behavior of the system. Q-learning is a temporal-differences off-policy algorithm. It is an algorithm for on-line control problems in which the values of the states are updated after observing the outcomes of the actions chosen in a given sequence.
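
For comparison with the Sarsa sketch above, a single Q-learning update might look as follows; the Q table and the parameter values are again illustrative.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One off-policy Q-learning update: the target uses the greedy (max)
    value over the actions available in s_next, regardless of which action
    the behavior policy will actually take there."""
    td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
```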

Off-Policy Learning: the value assigned to a given state (or state-action pair) is a function of the immediate reward and of the maximum rewards received in the subsequent states during the episode.

n-Armed Bandit: it is a stochastic problem in which each action has a certain expected value unknown to the player. In order to learn the expected reward associated with each action, an agent is required to try apparently non-optimal actions during some iterations (exploration) instead of repeatedly choosing the action with the highest expected value at a certain time (exploitation).
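
A common way to handle this trade-off is an epsilon-greedy rule; the sketch below (with an invented pull function and arbitrary parameters) estimates each arm's expected reward by incremental sample averages.

```python
import random

def epsilon_greedy_bandit(pull, n_arms, steps=1000, epsilon=0.1):
    """With probability epsilon try a random arm (exploration); otherwise
    choose the arm with the highest current estimate (exploitation).
    `pull(arm)` is a user-supplied function returning a stochastic reward."""
    estimates = [0.0] * n_arms
    counts = [0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                          # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])    # exploit
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean
    return estimates

# Example with three arms whose true mean rewards are 0.2, 0.5, and 0.8.
print(epsilon_greedy_bandit(lambda a: random.gauss([0.2, 0.5, 0.8][a], 1.0), 3))
```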

Reinforcement Learning: it stands, in the context of computational learning, for a family of algorithms aimed at approximating the best policy to play in a certain environment (without building an explicit model of it) by increasing the probability of playing actions that improve the rewards received by the agent.

Temporal-Differences Algorithm: it is a reinforcement-learning algorithm in which the learner uses the difference between the expected and actual rewards received in the different states of the world visited in a given episode to revise the expected value of the states (or state-action pairs).
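
The simplest instance is the TD(0) prediction update, sketched below for a state-value table V; the parameter values are illustrative.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(s) toward the observed reward plus the
    discounted estimate of the next state's value. td_error is the
    temporal-difference between the actual and expected outcome."""
    td_error = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * td_error
    return td_error
```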

On-Policy Learning: the value of a state (or state-action pair) is strictly based on the rewards gained from executing a given policy (sequence of actions) in a given episode.

Eligibility Trace: it is a parameter used to control the “memory” of the algorithm, associated with a given state, enabling the assignment of more credit (i.e., a higher trace) to the states and actions performed closer to the end of the episode.
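
A minimal sketch of how such traces are typically used in a TD(lambda)-style update, with accumulating traces and invented parameter values:

```python
def td_lambda_step(V, trace, s, r, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    """One TD(lambda) step with accumulating traces: the trace of the current
    state is bumped, every traced state receives credit in proportion to its
    trace, and all traces then decay by gamma * lambda, so recently visited
    states keep more of the credit."""
    td_error = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    trace[s] = trace.get(s, 0.0) + 1.0
    for state in list(trace):
        V[state] = V.get(state, 0.0) + alpha * td_error * trace[state]
        trace[state] *= gamma * lam
    return td_error
```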
