Reinforcement Learning for Combinatorial Optimization

Di Wang
Copyright: © 2023 | Pages: 15
DOI: 10.4018/978-1-7998-9220-5.ch170

Abstract

Combinatorial optimization (CO) problems arise in many important application domains, including social networks, manufacturing, and transportation. However, because most CO problems are NP-hard, traditional solvers require domain knowledge and hand-crafted heuristics. Facing big-data challenges, can such problems be solved by a learning framework within a short time? This article demonstrates how to solve combinatorial optimization problems with deep reinforcement learning (DRL). Reinforcement learning (RL) is a subfield of machine learning (ML) that learns an optimal policy over time. Built on the Markov decision process, RL has a solid theoretical foundation for obtaining optimal solutions. Once the parameters of a DRL model are trained, a new problem instance can be solved quickly. Moreover, DRL learns without labels by maximizing the accumulated discounted reward received from the environment. This article discusses three typical CO problems and presents the advantages of DRL over traditional methods.
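
As an illustration of this formulation, the sketch below (an illustrative toy, not the chapter's own code) casts a small travelling-salesman instance as a sequential decision problem: the state is the partial tour, an action appends an unvisited city, and the negative tour length serves as the reward a DRL agent would maximize. The random coordinates and the greedy rollout policy are assumptions standing in for a learned policy.

```python
import numpy as np

# Illustrative sketch: a small travelling-salesman instance viewed as a
# sequential decision problem. Coordinates and the greedy policy below are
# hypothetical stand-ins for a learned DRL policy.

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 1.0, size=(6, 2))            # 6 random cities
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def rollout(policy):
    """Build a tour city by city; the negative tour length is the episode reward."""
    tour = [0]
    while len(tour) < len(coords):
        tour.append(policy(tour))
    length = sum(dist[tour[i], tour[i + 1]] for i in range(len(tour) - 1))
    length += dist[tour[-1], tour[0]]                   # return to the start city
    return tour, -length                                # reward = -tour length

def greedy_policy(tour):
    """Pick the nearest unvisited city (a stand-in for a learned policy)."""
    unvisited = [c for c in range(len(coords)) if c not in tour]
    return min(unvisited, key=lambda c: dist[tour[-1], c])

tour, reward = rollout(greedy_policy)
print("tour:", tour, "reward (negative length):", round(reward, 3))
```
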
Chapter Preview

Introduction

Reinforcement learning (RL) is a subfield of machine learning (ML) that determines what actions an intelligent agent should take to maximize the cumulative reward received from the environment (Arulkumaran, 2017). Built on the Markov decision process (MDP), a discrete-time stochastic control process, reinforcement learning is well suited to sequential decision-making problems that trade off long-term and short-term rewards. Researchers introduced deep learning into reinforcement learning to exploit advanced GPU computation and the powerful representation capacity of neural networks. Moreover, exploiting deep neural networks (DNNs) as function approximators, instead of limited-capacity value tables, makes it possible for DRL to solve large-scale problems. The effectiveness of DRL has been demonstrated by applications across multiple industries, including robot control (Yuan, 2022), computer games (Peng, 2022), natural language processing (Du, 2022), healthcare (Oh, 2022), and finance (Asgari, 2022).
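
As a concrete illustration of using a DNN as a function approximator, the minimal sketch below (assuming PyTorch is available; the state dimension, action count, and dummy transition are illustrative, not from this chapter) replaces a limited-capacity Q-table with a small Q-network and performs a single temporal-difference update.

```python
import torch
import torch.nn as nn

# Illustrative sketch: a neural network approximates Q(s, a) instead of a table.
state_dim, n_actions, gamma = 8, 4, 0.99

# Q-network: maps a state vector to one Q-value per action.
q_net = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One temporal-difference update on a dummy transition (s, a, r, s').
s = torch.randn(1, state_dim)
a = torch.tensor([2])
r = torch.tensor([1.0])
s_next = torch.randn(1, state_dim)

with torch.no_grad():                      # bootstrap target: r + gamma * max_a' Q(s', a')
    target = r + gamma * q_net(s_next).max(dim=1).values
q_sa = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)
loss = nn.functional.mse_loss(q_sa, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```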

Figure 1.

Main concepts and terminologies in deep reinforcement learning. A deep reinforcement learning algorithm comprises states, actions, rewards, and a stationary transition function. The state-value function and the state-action value function are two approximations of the discounted cumulative reward. Besides, DQN, A2C, A3C, and PPO are state-of-the-art DRL algorithms, while CNN, RNN, GAT, and GCN are state-of-the-art neural network architectures.


As shown in Figure 1, deep reinforcement learning involves many terminologies and concepts. DRL's four essential constituents are states, actions, rewards, and the stationary transition function. Specifically, the state represents the observable environment, that is, the agent's surroundings or the target optimization problem. An action denotes a behavior of the agent or a decision variable of the optimization problem, while a policy specifies the actions taken over the sequence of time steps. The stationary transition function records the probabilities of transitioning from one state to another. The reward evaluates the action taken and gives the training process its direction; because this reward signal replaces explicit labels, DRL is sometimes described as lying between supervised and unsupervised learning. Unlike supervised learning, no ground truth is provided. Theoretically, DRL updates the parameters of the neural networks greedily along the gradient that maximizes the expected reward. In detail, policy evaluation and policy improvement are the essential steps of the training process; Figure 2 illustrates how these two steps cooperate. The policy evaluation step iteratively estimates the state-value function v𝜋, where 𝜋 denotes the adopted policy. An accurate evaluation provides accurate guidance to the policy improvement step. The policy improvement step greedily generates better policies, which in turn provide more data for the policy evaluation step.
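
The minimal sketch below (a toy MDP with hypothetical transition and reward tables, not from the chapter) shows how the two steps cooperate: policy evaluation iteratively estimates v𝜋 for the current policy, and policy improvement then acts greedily with respect to those values until the policy stops changing.

```python
import numpy as np

# Toy policy iteration on a hypothetical MDP, alternating policy evaluation
# and policy improvement until the policy is stable.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

# P[s, a, s'] : stationary transition probabilities; R[s, a] : expected reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

policy = np.zeros(n_states, dtype=int)          # start from an arbitrary policy
v = np.zeros(n_states)                          # state-value estimates v_pi

for _ in range(100):
    # Policy evaluation: iteratively estimate v_pi for the current policy.
    for _ in range(500):
        v_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ v
                          for s in range(n_states)])
        if np.max(np.abs(v_new - v)) < 1e-8:
            v = v_new
            break
        v = v_new
    # Policy improvement: act greedily with respect to the evaluated values.
    q = R + gamma * np.einsum("sap,p->sa", P, v)     # q[s, a]
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):           # policy stable -> stop
        break
    policy = new_policy

print("greedy policy:", policy)
print("state values:", np.round(v, 3))
```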

Key Terms in this Chapter

Deep Reinforcement Learning: A learning algorithm that combines reinforcement learning with deep neural networks, allowing the agent to make decisions by trial and error.

Markov Decision Process: A discrete-time stochastic control process that models decision making in environments with randomness.

Loss Function: A function that maps the values of variables to a real number, quantifying the cost to be minimized during training.

Neural Network: A series of algorithms that extracts relationships from data by mimicking the way the human brain operates.

Combinatorial Optimization: A subfield of optimization that aims to find an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set.

Gradient: A vector pointing in the direction of a function's steepest ascent; the steepness of the slope at a point is given by the magnitude of the gradient vector.

Deep Learning: A machine learning technique that uses multi-layer neural networks to teach computers to do what comes naturally to humans.
