Simulating Cooperative Behaviors in Dynamic Networks

Yu Zhang (Trinity University, USA), Jason Leezer (Pervasive Software, USA) and Kenny Wong (Trinity University, USA)
Copyright: © 2010 |Pages: 19
DOI: 10.4018/jats.2010070104

Abstract

Agent-based social simulation uses agent systems to study social behaviors and phenomena. A difficulty in producing social simulations lies in modeling the emergence of social norms. Although empirical evidence has provided insight into how human relationships are organized, the way in which those relationships produce cooperative behavior when each agent seeks only to maximize its own utility is not well defined. This paper proposes a new rule for social interactions called the Highest Rewarding Neighborhood (HRN) rule. The HRN rule allows agents to remain selfish and to break relationships in order to maximize their utility. Our experiments show that when agents are able to break unrewarding relationships, a Pareto-optimum strategy arises as the social norm. In addition, the authors conclude that the rate and amount of Pareto-optimum strategy that arises depend on the network structure when the networks are dynamic, and that the rate is independent of the network structure when the networks are static.

Introduction

Social simulation is a class of simulations used to study social behavior and phenomena (Axelrod, 1997). One particular type of social simulation is known as agent-based social simulation (Davidsson, 2002). Here, agents are used to model social entities such as people, groups, and towns. One purpose of these models is to study “generative social science” (Epstein, 1999), i.e., how could the decentralized local interactions of autonomous agents generate social norms? This is certainly not an easy task, as the entire field is founded on researching problems of this kind.

This paper studies the emergence of social norms in systems of social agents. In multi-agent simulations, a social norm is defined as a regular behavior that is a solution to a recurrent or continuous social cooperation problem (Dignum, 1999). Social cooperation problems are also known as social dilemmas. These problems are dilemmas because, in such settings, everyone benefits from joint cooperation but an individual has the potential to benefit more by defecting. The problem of how cooperative behavior can emerge in such a setting has fueled research in multiple disciplines such as sociology, game theory, economics, and artificial intelligence. Existing attempts to answer the question of how social norms emerge have two major disadvantages:

  • In existing models, agents do not learn about their environment; instead, they merely serve as vessels to spread influence. This gives no insight into the adoption of social norms by selfish autonomous agents.

  • Only static networks are modeled, i.e., connections between agents never change once the network is generated, which is unrealistic in real-world settings.

We propose a new rule for social interactions. The rule is called Highest Rewarding Neighborhood (HRN). The HRN rule allows agents to learn from the environment and compete in networks. Under the HRN rule, cooperative behavior emerges even though agents are selfish and attempt to only maximize their own utility. This is because agents are able to break unrewarding relationships and therefore are able to maintain mutually beneficial neighborhoods. This leads to a Pareto-optimum social convention.
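The core of the HRN idea can be sketched in a few lines: a selfish agent tracks the payoff it has received from each neighbor and drops the least rewarding relationship. The names and payoff values below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the HRN idea: an agent keeps a payoff history
# per neighbor and identifies the least rewarding relationship to break.
def least_rewarding(neighbor_payoffs):
    """Return the neighbor with the lowest average payoff."""
    return min(neighbor_payoffs,
               key=lambda n: sum(neighbor_payoffs[n]) / len(neighbor_payoffs[n]))

# Assumed payoff histories from repeated play with two neighbors.
payoffs = {
    "B": [3, 3, 3],  # mutual cooperation: consistently high payoff
    "C": [0, 1, 0],  # frequent defection: consistently low payoff
}

worst = least_rewarding(payoffs)
print(worst)  # the agent would break its relationship with "C"
```

Breaking the low-payoff edge is what lets mutually cooperative neighborhoods persist while defectors lose partners.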

In order to be rational in a dynamic environment, an agent’s internal representation of its environment must change as it gains more knowledge. This process is known as learning. When an agent must learn optimal behavior in a dynamic environment through trial and error, it employs what is known as reinforcement learning (Kaelbling, 1996). It has been shown that reinforcement learning can model realistic behavior in problems of cooperation and coordination (Erev & Roth, 1998). The reinforcement learning method that our agents employ is Q-learning (see Section III for details).
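A minimal sketch of the Q-learning update the agents rely on, for a hypothetical single-state game with two actions; the payoffs, learning rate, and exploration scheme here are assumptions for illustration, not the parameters used in the paper.

```python
import random

# Assumed setup: one state, two actions, epsilon-greedy exploration.
ACTIONS = ["cooperate", "defect"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative learning parameters

q = {a: 0.0 for a in ACTIONS}  # Q-table for the single state

def choose_action():
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

def update(action, reward):
    """Standard Q-learning update: Q += alpha * (r + gamma * max Q - Q)."""
    best_next = max(q.values())
    q[action] += ALPHA * (reward + GAMMA * best_next - q[action])

# Trial-and-error loop: cooperation yields a higher (assumed) payoff.
random.seed(0)
for _ in range(2000):
    a = choose_action()
    r = 3.0 if a == "cooperate" else 1.0
    update(a, r)

print(max(q, key=q.get))  # the higher-payoff action dominates
```

The update rule is the textbook one; the point is only that repeated reinforcement lets the agent discover the higher-payoff action without any model of its neighbors.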

In multi-agent simulations, when agents communicate with each other or work together on a common goal, they are often organized into networks. A network is a set of items called vertices and connections between them called edges (Newman, 2003). A network is called static if edges are never created or removed after the generation of the graph. A dynamic network is one in which edges are created and removed as the network evolves.
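The static/dynamic distinction can be made concrete with a small adjacency-set representation; this is an illustrative sketch, not the authors' data structure.

```python
# A tiny undirected network as adjacency sets (hypothetical example).
network = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A"},
}

def break_edge(net, u, v):
    """In a dynamic network, an unrewarding edge may be removed..."""
    net[u].discard(v)
    net[v].discard(u)

def create_edge(net, u, v):
    """...and a new edge may be created as the network evolves."""
    net[u].add(v)
    net[v].add(u)

# A static network would forbid both operations after generation.
break_edge(network, "A", "B")   # A drops its link to B
create_edge(network, "B", "C")  # B rewires to C

print(sorted(network["B"]))  # ['C']
```

Under the HRN rule it is exactly this rewiring ability that distinguishes the dynamic-network experiments from the static ones.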
