Traffic Congestion Management as a Learning Agent Coordination Problem

Kagan Tumer (Oregon State University, USA), Zachary T. Welch (Oregon State University, USA) and Adrian Agogino (NASA Ames Research Center, USA)
Copyright: © 2009 | Pages: 19
DOI: 10.4018/978-1-60566-226-8.ch012

Abstract

Traffic management problems provide a unique environment for studying how multi-agent systems promote desired system-level behavior. In particular, they represent a special class of problems where the individual actions of the agents are neither intrinsically “good” nor “bad” for the system. Instead, it is the combination of actions among agents that leads to desirable or undesirable outcomes. As a consequence, agents need to learn how to coordinate their actions with those of other agents, rather than learn a particular set of “good” actions. In this chapter, the authors focus on problems where there is no communication among the drivers, which puts the burden of coordination on the principled selection of the agent reward functions. They explore the impact of agent reward functions on two types of traffic problems. In the first problem, the authors study how agents learn the best departure times in a daily commuting environment and how following those departure times alleviates congestion. In the second problem, the authors study how agents learn to select desirable lanes to improve traffic flow and minimize delays for all drivers. In both cases, they focus on having an agent select the most suitable action for each driver using reinforcement learning, and explore the impact of different reward functions on system behavior. Their results show that agent rewards that are both aligned with, and sensitive to, the system reward lead to significantly better results than purely local or global agent rewards. They conclude the chapter by discussing how changing the way system performance is measured affects the relative performance of these reward functions, and how agent rewards derived for one setting (timely arrivals) can be modified to meet a new system setting (maximizing throughput).
Chapter Preview

1 Introduction

The purpose of this chapter is to quantify how the decisions of local agents in a traffic system (e.g., drivers) affect overall traffic patterns. In particular, this chapter explores the system coordination problem of how to configure and update the system so that individual decisions lead to good system-level behavior. From a broader perspective, this chapter demonstrates how to measure the alignment between the local agents in a system and the system at large. Because the focus of this chapter is on multiagent coordination and reward analysis, we focus on abstract, mathematical models of traffic rather than full-fledged simulations. Our main purpose is to demonstrate the impact of reward design and to extract the key properties rewards need to have to alleviate congestion in large agent coordination problems, such as traffic.

In this chapter we apply multi-agent learning algorithms to two separate congestion problems. First, we investigate how to coordinate the departure times of a set of drivers so that they do not produce traffic “spikes” at certain times, which both cause delays at those times and create congestion for subsequent departures. In this problem, different time slots have different desirabilities that reflect user preferences for particular time slots. The system objective is to maximize the overall system’s satisfaction as a weighted average of those desirabilities. In the second problem we investigate lane selection, where a set of drivers need to select different lanes to a destination (Moriarty and Langley, 1998; Pendrith, 2000). In this problem, different lanes have different capacities, and the agents’ task is to minimize the total congestion. Both problems share the same underlying property: agents greedily pursuing the best interests of their own drivers cause traffic to worsen for everyone in the system, including themselves.
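As a concrete illustration, the departure-time problem can be captured by a system objective that rewards each time slot in proportion to its desirability weight while penalizing overcrowding. The exponential congestion term and the capacity parameter below are illustrative assumptions for this sketch, not necessarily the chapter’s exact model:

```python
import math

def system_reward(counts, weights, capacity=5.0):
    """Weighted system objective for the departure-time problem.

    counts[t]  -- number of drivers who chose time slot t
    weights[t] -- desirability of slot t (user preference)

    Each slot contributes weight * k * exp(-k / capacity): the term
    grows while a slot is under-used and decays once the slot is
    congested, so spreading drivers across desirable slots scores best.
    """
    return sum(w * k * math.exp(-k / capacity)
               for w, k in zip(weights, counts))
```

For example, with equal weights, five drivers in each of two slots score higher than ten drivers piled into a single slot, which is exactly the traffic “spike” effect described above.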

Indeed, multi-agent learning algorithms provide a natural approach to addressing congestion problems in traffic and transportation domains (Bazzan et al., 1999; Dresner and Stone, 2004; Klügl et al., 2005). Congestion problems are characterized by having the system performance depend on the number of agents that select a particular action, rather than on the intrinsic value of those actions. Examples of such problems include lane/route selection in traffic flow (Kerner and Rehborn, 1996; Nagel, 1997), path selection in data routing (Lazar et al., 1997), and side selection in the minority game (Challet and Zhang, 1998; Jefferies et al., 2002). In those problems, the desirability of lanes, paths or sides depends solely on the number of agents having selected them. Hence, multi-agent approaches that focus on agent coordination are ideally suited for these domains, where coordination is critical for achieving desirable system behavior.

The approach we present to alleviating congestion in traffic is based on assigning each driver an agent which determines the departure time/lane to select. The agents determine their actions based on a reinforcement learning algorithm (Littman, 1994; Sutton and Barto, 1998; Watkins and Dayan, 1992). In this reinforcement learning paradigm, agents go through a process where they take actions and receive rewards evaluating the effect of those actions. Based on these rewards the agents try to improve their actions (see Figure 1). The key issue in this approach is to ensure that the agents receive rewards that promote good system level behavior. To that end, it is imperative that the agent rewards: (i) are aligned with the system reward1, ensuring that when agents aim to maximize their own reward they also aim to maximize system reward; and (ii) are sensitive to the actions of the agents, so that the agents can determine the proper actions to select (i.e., they need to limit the impact of other agents in the reward functions of a particular agent).
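A reward family known to satisfy both criteria is the difference reward, Di = G(z) − G(z−i): the system reward minus the system reward with agent i removed. The sketch below pairs it with a simple congestion model; the exponential form and capacity value are assumptions for this example:

```python
import math

def G(counts, weights, capacity=5.0):
    """Illustrative global (system) reward: each slot's contribution
    peaks near the capacity and decays when the slot is overcrowded."""
    return sum(w * k * math.exp(-k / capacity)
               for w, k in zip(weights, counts))

def difference_reward(counts, weights, agent_slot, capacity=5.0):
    """D_i = G(z) - G(z with agent i removed).

    Aligned: G(z without i) does not depend on i's action, so any
    action change that raises D_i raises G by the same amount.
    Sensitive: the subtraction cancels most of the other agents'
    effect, so D_i tracks agent i's own contribution.
    """
    without = list(counts)
    without[agent_slot] -= 1
    return G(counts, weights, capacity) - G(without, weights, capacity)
```

For instance, with two equally weighted slots and counts [10, 1], an agent in the crowded slot receives a negative difference reward while the agent in the lightly used slot receives a positive one, steering agents toward under-used slots.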

Figure 1.

Reinforcement learning for congestion management. A set of agents (cars) take actions; the result of each action is rewarded; the agents then modify their policies using these rewards.
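The loop in Figure 1 can be sketched as stateless epsilon-greedy value learners: each agent keeps one estimated value per action, acts, observes a reward, and nudges the chosen action’s value toward that reward. The learning rule, rates, and pluggable reward function below are generic assumptions for illustration, not the chapter’s exact learners:

```python
import random

def run_learning_loop(n_agents, n_actions, reward_fn,
                      episodes=200, eps=0.1, lr=0.1, seed=0):
    """Stateless epsilon-greedy learners for a congestion game.

    reward_fn(counts, agent, action) -> float is pluggable, so the
    same loop can compare local, global, or difference rewards.
    """
    rng = random.Random(seed)
    values = [[0.0] * n_actions for _ in range(n_agents)]
    for _ in range(episodes):
        # Each agent explores with probability eps, else exploits.
        actions = [rng.randrange(n_actions) if rng.random() < eps
                   else max(range(n_actions), key=v.__getitem__)
                   for v in values]
        counts = [actions.count(a) for a in range(n_actions)]
        # Every agent updates only the value of the action it took.
        for i, a in enumerate(actions):
            r = reward_fn(counts, i, a)
            values[i][a] += lr * (r - values[i][a])
    return values
```

With a congestion-penalizing reward plugged in, learners trained this way tend to spread across actions rather than piling onto a single one.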

Complete Chapter List

List of Reviewers
Table of Contents
Preface
Ana Bazzan, Franziska Klügl
Acknowledgment
Ana Bazzan, Franziska Klügl
Chapter 1
Takeshi Takama
Adaptation and Congestion in a Multi-Agent System to Analyse Empirical Traffic Problems: Concepts and a Case Study of the Road User Charging Scheme at the Upper Derwent
Chapter 2
Qi Han, Theo Arentze, Harry Timmermans, Davy Janssens, Geert Wets
A Multi-Agent Modeling Approach to Simulate Dynamic Activity-Travel Patterns
Chapter 3
Michael Balmer, Marcel Rieser, Konrad Meister, David Charypar, Nicolas Lefebvre, Kai Nagel
MATSim-T: Architecture and Simulation Times
Chapter 4
Ulf Lotzmann
TRASS: A Multi-Purpose Agent-Based Simulation Framework for Complex Traffic Simulation Applications
Chapter 5
Paulo A.F. Ferreira, Edgar F. Esteves, Rosaldo J.F. Rossetti, Eugénio C. Oliveira
Applying Situated Agents to Microscopic Traffic Modelling
Chapter 6
Andreas Schadschneider, Hubert Klüpfel, Tobias Kretz, Christian Rogsch, Armin Seyfried
Fundamentals of Pedestrian and Evacuation Dynamics
Chapter 7
Rex Oleson, D. J. Kaup, Thomas L. Clarke, Linda C. Malone, Ladislau Bölöni
"Social Potential" Models for Modeling Traffic and Transportation
Chapter 8
Sabine Timpf
Towards Simulating Cognitive Agents in Public Transport Systems
Chapter 9
Kurt Dresner, Peter Stone, Mark Van Middlesworth
An Unmanaged Intersection Protocol and Improved Intersection Safety for Autonomous Vehicles
Chapter 10
Heiko Schepperle, Klemens Böhm
Valuation-Aware Traffic Control: The Notion and the Issues
Chapter 11
Charles Desjardins, Julien Laumônier, Brahim Chaib-draa
Learning Agents for Collaborative Driving
Chapter 12
Kagan Tumer, Zachary T. Welch, Adrian Agogino
Traffic Congestion Management as a Learning Agent Coordination Problem
Chapter 13
Matteo Vasirani, Sascha Ossowski
Exploring the Potential of Multiagent Learning for Autonomous Intersection Control
Chapter 14
Tomohisa Yamashita, Koichi Kurumatani
New Approach to Smooth Traffic Flow with Route Information Sharing
Chapter 15
Denise de Oliveira, Ana L.C. Bazzan
Multiagent Learning on Traffic Lights Control: Effects of Using Shared Information
Chapter 16
Tamás Máhr, F. Jordan Srour, Mathijs de Weerdt, Rob Zuidwijk
The Merit of Agents in Freight Transport
Chapter 17
Lawrence Henesey, Jan A. Persson
Analyzing Transactions Costs in Transport Corridors Using Multi Agent-Based Simulation
Chapter 18
Shawn R. Wolfe, Peter A. Jarvis, Francis Y. Enomoto, Maarten Sierhuis, Bart-Jan van Putten
A Multi-Agent Simulation of Collaborative Air Traffic Flow Management
About the Contributors