Norms of Behaviour and Their Identification and Verification in Open Multi-Agent Societies

Wagdi Alrawagfeh (Memorial University of Newfoundland, Canada), Edward Brown (Memorial University of Newfoundland, Canada) and Manrique Mata-Montero (Memorial University of Newfoundland, Canada)
Copyright: © 2012 |Pages: 17
DOI: 10.4018/978-1-4666-1565-6.ch009

Abstract

Norms have an obvious role in coordinating and predicting behaviours in societies of software agents. Most researchers assume that agents already know the norms of their societies at design time; others assume that norms are assigned by a leader or legislator. Some researchers consider the acquisition of a society's norms through inference, but their work applies to closed multi-agent societies in which agents share an identical (or similar) internal architecture for representing norms. This paper addresses three things: 1) the idea of a Verification Component, previously used to verify candidate norms in multi-agent societies; 2) a known modification of the Verification Component that makes it applicable in open multi-agent societies; and 3) a further modification of the Verification Component that lets agents dynamically infer newly emerged and abrogated norms in open multi-agent societies. Using the JADE software framework, we build a restaurant interaction scenario as an example (restaurants typically host heterogeneous agents) and demonstrate how agents can identify permitted and prohibited behaviour using dynamic norms.

Introduction

Research in multi-agent systems has adopted several concepts from other disciplines. From philosophy, for example, human behaviour is modelled in terms of Beliefs, Desires and Intentions (Bratman, 1987). Explicitly representing these mental states in agent architectures produces agents whose behaviours are shaped by their adopted beliefs, desires and intentions. Another example is the concept of norms from social science: norms govern the behaviour of a society's individuals, and researchers in multi-agent systems have adopted the concept to imitate the collective behaviours of human society. Creating software agents that are aware of norms helps to regulate or predict agent behaviour, and also makes the development and coordination of multi-agent systems easier to manage (Dastani et al., 2008; Dignum, 1999). A system of autonomous agents regulated by norms is called a normative multi-agent system (Lopez, 2003). Boella et al. (2008) provide a more elaborate definition of a normative multi-agent system as “a multi-agent system organized by means of mechanisms to represent, communicate, distribute, detect, create, modify, and enforce norms, and mechanisms to deliberate about norms and detect norm violation and fulfillment”. Normative system models (including the concepts of permission, obligation and prohibition), combined with multi-agent systems, suggest a promising model for human and artificial agent societies (Boella et al., 2010).

Some researchers have concentrated on how to use norms to regulate and predict agents’ behaviours, and how to enforce agents’ compliance with norms without limiting their autonomy. For this purpose, two approaches have been used: regimentation and enforcement (Grossi et al., 2007). Under regimentation, agents have no ability to violate norms; the norms are hard constraints, which drastically reduces agents’ autonomy. Under enforcement, each agent decides whether to comply with or violate a norm, with consequences determined by the enforcement regime. This decision might be based on the agent’s beliefs, intentions, obligations, desires or something else, and the choice is tied to the concept of autonomy in artificial agents. When an agent violates a norm, a sanction might be applied to the violator as a means of enforcement; this sanction (or punishment) encourages agents to comply with norms, just as a reward might be granted to an agent that respects a norm. The objective of such enforcement is commonly to produce stable and predictable behaviour of the overall system (Castelfranchi, 2004).
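The contrast between the two approaches can be sketched in code. The following is a minimal illustration, not the chapter's implementation: the `Norm` class, the utility-based `decide` function, and the example action names are all assumptions chosen to make the regimentation/enforcement distinction concrete. Under enforcement the agent weighs its gain against the expected sanction; under regimentation the violating action is simply unavailable.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A prohibition with an attached sanction (hypothetical representation)."""
    action: str
    sanction: float  # expected penalty for violating this norm

def decide(action: str, gain: float, norms: list[Norm]) -> bool:
    """Enforcement: an autonomous agent may still violate a norm.
    Here it performs the action only if its gain exceeds the total
    sanction attached to the norms that action would violate."""
    penalty = sum(n.sanction for n in norms if n.action == action)
    return gain > penalty

def decide_regimented(action: str, gain: float, norms: list[Norm]) -> bool:
    """Regimentation: a prohibited action is a hard constraint and can
    never be performed, regardless of the gain."""
    return not any(n.action == action for n in norms)

norms = [Norm("smoke_in_restaurant", sanction=5.0)]
print(decide("smoke_in_restaurant", gain=2.0, norms=norms))             # False
print(decide("smoke_in_restaurant", gain=10.0, norms=norms))            # True: agent chooses to violate
print(decide_regimented("smoke_in_restaurant", gain=10.0, norms=norms)) # False: violation impossible
```

The sketch shows why enforcement preserves autonomy: the violating branch exists and is sometimes taken, whereas regimentation removes it entirely.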

Mechanisms by which agents can adopt norms have been studied, distinguishing rational from irrational norm adoption (Elster, 1989). Some researchers have proposed rational strategies for norm adoption (Lopez, 2003; Conte et al., 1999). Others deal with resolving norm conflicts and inconsistencies at adoption time (Kollingbaum & Norman, 2003); these may arise when an agent joins a society and must adopt new norms to function within it, for example an obligation to perform an action it previously considered prohibited. Norms might be established in several ways: they might be assigned by a legislator (Boman, 1999), might emerge or be negotiated among agents (Boella & van der Torre, 2007), or might be acquired by machine learning techniques based on previous experience (Koeppen & Lopez-Sanchez, 2010).
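The conflict-at-adoption-time case described above can be illustrated with a small sketch. This is an assumed encoding, not Kollingbaum and Norman's formalism: a norm is represented as a (deontic modality, action) pair, and an incoming obligation is flagged as conflicting if the agent has already adopted a prohibition on the same action (and vice versa).

```python
def conflicts(new_norm: tuple[str, str], adopted: set[tuple[str, str]]) -> bool:
    """A norm is a (deontic, action) pair, e.g. ("obliged", "share_data").
    An incoming obligation conflicts with an adopted prohibition on the
    same action, and vice versa (simplified, illustrative check)."""
    deontic, action = new_norm
    opposite = {"obliged": "prohibited", "prohibited": "obliged"}
    return (opposite.get(deontic), action) in adopted

# The agent previously considered sharing data prohibited...
adopted = {("prohibited", "share_data")}
# ...and the new society obliges exactly that action:
print(conflicts(("obliged", "share_data"), adopted))  # True: must be resolved before adoption
print(conflicts(("obliged", "pay_fee"), adopted))     # False: safe to adopt
```

A real agent would follow a detected conflict with a resolution strategy, e.g. preferring the new society's norm or retracting the old one; the sketch only covers detection.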

Given that the norms of a society can change or even disappear, agents need a mechanism to identify or infer these changes. Likewise, when an agent joins a new society, it may use a similar mechanism to identify and adopt that society's norms. Norms play a significant role in regulating, controlling and predicting agents’ behaviours, and from the perspective of an individual agent it is important to know which behaviours are acceptable in a particular society. Identifying changing norms helps agents adapt their plans for achieving their goals (Savarimuthu et al., 2010).
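One simple way an agent could infer such changes from observation alone can be sketched as follows. This heuristic is an assumption for illustration, not the chapter's Verification Component: the agent records, for each observed action, whether it drew a sanction, and treats an action sanctioned in more than half of its occurrences as currently prohibited. Re-running the inference over a sliding window of recent observations would let the agent notice newly emerged and abrogated prohibitions.

```python
from collections import Counter

def infer_prohibitions(observations: list[tuple[str, bool]],
                       threshold: float = 0.5) -> set[str]:
    """Infer prohibitions from observed (action, was_sanctioned) pairs:
    an action sanctioned in more than `threshold` of its occurrences is
    taken to be prohibited (illustrative heuristic)."""
    total, sanctioned = Counter(), Counter()
    for action, was_sanctioned in observations:
        total[action] += 1
        if was_sanctioned:
            sanctioned[action] += 1
    return {a for a in total if sanctioned[a] / total[a] > threshold}

# Observations from a hypothetical restaurant society:
obs = [("smoke", True), ("smoke", True), ("smoke", False),
       ("tip", False), ("tip", False)]
print(infer_prohibitions(obs))  # {'smoke'}
```

If later observations showed smoking no longer being sanctioned, the same procedure applied to the newer window would drop "smoke" from the inferred set, i.e. the agent would infer that the norm had been abrogated.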
