Reinforcement Learning with Particle Swarm Optimization Policy (PSO-P) in Continuous State and Action Spaces

Daniel Hein (Technische Universität München, Munich, Germany), Alexander Hentschel (Siemens AG, Munich, Germany), Thomas A. Runkler (Siemens AG, Munich, Germany) and Steffen Udluft (Siemens AG, Munich, Germany)
Copyright: © 2016 | Pages: 20
DOI: 10.4018/IJSIR.2016070102


This article introduces a model-based reinforcement learning (RL) approach for continuous state and action spaces. While most RL methods try to find closed-form policies, the approach taken here employs numerical on-line optimization of control action sequences. First, a general method for reformulating RL problems as optimization tasks is provided. Subsequently, Particle Swarm Optimization (PSO) is applied to search for optimal solutions. This Particle Swarm Optimization Policy (PSO-P) is effective for high-dimensional state spaces and does not require a priori assumptions about adequate policy representations. Furthermore, by translating RL problems into optimization tasks, the rich collection of real-world-inspired RL benchmarks is made available for benchmarking numerical optimization techniques. The effectiveness of PSO-P is demonstrated on two standard benchmarks: mountain car and cart pole.
Article Preview


Reinforcement learning (RL) is an area of machine learning inspired by biological learning. Formally, a software agent interacts with a system in discrete time steps. At each time step, the agent observes the system's state s and applies an action a. Depending on s and a, the system transitions into a new state and the agent receives a real-valued reward r. The agent's goal is to maximize its expected cumulative reward, called return R. The solution to an RL problem is a policy, i.e. a map that generates an action for any given state.
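The interaction loop above can be sketched in a few lines. The environment below is a hypothetical 1-D toy system chosen for illustration (it is not one of the paper's benchmarks): the state drifts by the chosen action plus noise, and the reward penalizes distance from the origin; the discount factor and horizon are likewise illustrative assumptions.

```python
import random

def step(state, action):
    """Transition the toy system and return (next_state, reward)."""
    next_state = state + action + random.gauss(0.0, 0.05)
    reward = -abs(next_state)          # higher reward closer to the origin
    return next_state, reward

def rollout(policy, state, horizon, gamma=0.99):
    """Accumulate the discounted return R = sum_t gamma^t * r_t."""
    ret = 0.0
    for t in range(horizon):
        action = policy(state)         # policy: a map from state to action
        state, reward = step(state, action)
        ret += (gamma ** t) * reward
    return ret

# A trivial proportional policy: act against the current state.
policy = lambda s: -0.5 * s
R = rollout(policy, state=1.0, horizon=20)
```

Any candidate policy can be scored this way, which is exactly the evaluation primitive the optimization-based view of RL builds on.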

This article focuses on the most general RL setting with continuous state and action spaces. In this domain, the policy performance often strongly depends on the algorithms for policy generation and the chosen policy representation (Sutton & Barto, 1998). In the authors’ experience, tuning the policy-learning process is generally challenging for industrial RL problems. Specifically, it is hard to assess whether a trained policy has unsatisfactory performance due to inadequate training data, unsuitable policy representation, or an unfitting training algorithm. Determining the best problem-specific RL approach often requires time-intensive trials with different policy configurations and training algorithms. In contrast, it is often significantly easier to train a well-performing system model from observational data, compared to directly learning a policy and assessing its performance.

To bypass the challenges of learning a closed-form RL policy, the authors adapted an approach from model-predictive control (Rawlings & Mayne, 2009; Camacho & Alba, 2007), which employs only a system model. The general idea behind model-predictive control is deceptively simple: given a reliable system model, one can predict the future evolution of the system and determine a control strategy that results in the desired system behavior. However, complex industry systems and plants commonly exhibit nonlinear system dynamics (Schaefer, Schneegass, Sterzing, & Udluft, 2007; Piche, et al., 2000). In such cases, closed-form solutions to the optimal control problem often do not exist or are computationally intractable to find (Findeisen & Allgoewer, 2002; Magni & Scattolini, 2004). Therefore, model-predictive control tasks for nonlinear systems are typically solved by numerical on-line optimization of sequences of control actions (Gruene & Pannek, 2011). Unfortunately, the resulting optimization problems are generally non-convex (Johansen, 2011) and no universal method for tackling nonlinear model-predictive control tasks has been found (Findeisen, Allgoewer, & Biegler, 2007; Rawlings, 2000). Moreover, one might argue based on theoretical considerations that such a universal optimization algorithm does not exist (Wolpert & Macready, 1997).
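The on-line optimization idea behind PSO-P can be sketched as follows: at each time step, run PSO over a sequence of future actions evaluated on the system model, then execute only the first action (receding-horizon control). The toy model, reward, bounds, and all PSO parameter values below are illustrative assumptions, not the paper's experimental settings.

```python
import random

random.seed(1)

HORIZON = 10                # length of the optimized action sequence
SWARM = 20                  # number of particles
ITERS = 50                  # PSO iterations per time step
W, C1, C2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients

def model(state, action):
    """Toy deterministic system model: the state drifts with the action."""
    return state + 0.1 * action

def predicted_return(state, actions):
    """Simulate the model along the action sequence; reward = -|state|."""
    ret = 0.0
    for a in actions:
        state = model(state, a)
        ret += -abs(state)
    return ret

def pso_policy(state):
    """Return the first action of the best action sequence found by PSO."""
    pos = [[random.uniform(-1, 1) for _ in range(HORIZON)] for _ in range(SWARM)]
    vel = [[0.0] * HORIZON for _ in range(SWARM)]
    pbest = [p[:] for p in pos]
    pbest_f = [predicted_return(state, p) for p in pos]
    g = max(range(SWARM), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(ITERS):
        for i in range(SWARM):
            for d in range(HORIZON):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                # Clip actions to the admissible range [-1, 1].
                pos[i][d] = max(-1.0, min(1.0, pos[i][d] + vel[i][d]))
            f = predicted_return(state, pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest[0]

a0 = pso_policy(state=1.0)  # on-line: re-optimize at every time step
```

Because the swarm is re-run at every time step on the current state, no closed-form policy representation is ever needed; the model plus the optimizer together act as the policy.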
