On Agent Interactions Governed by Morality

Helder Coelho, António Carlos da Rocha Costa, Paulo Trigo
DOI: 10.4018/978-1-4666-5954-4.ch002

Abstract

Morality tells agents what they ought to do, and this defines their identity and character. This chapter deals with moral behaviour, following the classical view in Philosophy that regards character as a state concerned with choice, able to direct the agent's decision-making. The authors also introduce new values regarding an agent's moral signature that may enhance the evaluation of agents, namely reputation and satisfaction. The popularity of agents can thus be measured in greater depth, not only in organizations but also in social networks.
Chapter Preview

Introduction

Character is like a tree and reputation like its shadow. The shadow is what we think of it; the tree is the real thing. (Abraham Lincoln)

Life in society may be studied with agent-based simulation experiments, decision theory, and game theory (Dehghani et al., 2008). Yet simulation is epistemologically and ontologically different from empirical research, and it is a purely deductive activity. Also, classical game theory is not sufficient to explain, in qualitative terms, the fairness or the altruistic conduct of agents along social interactions (Dignum et al., 2001). Two other ideas, social preference theory and network structures, were introduced later with good results, but there is still a need to improve the systematic understanding of individual choice in parametric contexts, where an agent deliberates independently of the will of other agents (Castelfranchi et al., 2000, 2006; Castelfranchi, 2014).

The use of game theory (GT) for modelling moral dynamics can be pursued by evolutionary GT in biological contexts, and by algorithmic GT in social, psychological, and economic contexts (Hegselmann, 2009). GT covers strategic interactions among rational players, with a focus on preferences over outcomes. An associated topic is moral dynamics, usually referring to the processes by which moral behaviour and moral attitudes emerge. Attitudes concern the internal side of morality (internalized norms, moral dispositions, accepted values, guiding virtues, and feelings such as guilt, regret, and shame); moral behaviour concerns the external side of morality.

In real life, and nowadays, there is a need to differentiate intentions, decisions, and actions: those that are good (right) from those that are bad (wrong). In a world where ethics is absent, agent character is an added value, and simulation models are eager to incorporate it in order to promote trust, reputation, responsibility, and honesty. Characterizing agents by preference rankings and beliefs alone is not enough to capture the real thing in society (Briot et al., 2008).

In our opinion, morality, strategy, and power are the three key ingredients for fixing the qualities (differences in character), the preferences, and the equality relationships of the members of societies (Greene et al., 2002). Therefore, the architecture of the agents needs to become explicit and crystal clear (far from a simple black box) in social simulation: moral agents are rational negotiators who never forget to choose the best alternative at each moment of choice by constrained maximization. Heuristic machinery is behind the idea of mutually beneficial bargaining and fair principles. Also, morality enables agents to cooperate and coordinate their actions in situations where the pursuit of self-interest prevents this. As a final consequence, morality is a kind of social regulator of choices (“needs as pushing motives”), driven by principles and axioms, and triggered by emotions and feelings to control backward and forward information flows under moral thinking (Dimuro et al., 2010).
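To make the idea of constrained maximization concrete, the following Python sketch (all action names, payoff numbers, and the harm threshold are illustrative assumptions, not the chapter's own model) selects the highest-utility action among those that pass a simple moral filter:

```python
# Minimal sketch of constrained maximization: a moral agent maximizes
# utility only over the alternatives that its moral filter accepts.
# All names and numbers are illustrative assumptions, not the chapter's model.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float         # expected benefit to the agent
    harm_to_others: float   # expected harm imposed on other agents

def morally_permissible(action: Action, harm_threshold: float = 0.2) -> bool:
    """A toy moral constraint: reject actions that harm others too much."""
    return action.harm_to_others <= harm_threshold

def choose(actions: list[Action]) -> Action:
    """Constrained maximization: best utility among permissible actions."""
    permissible = [a for a in actions if morally_permissible(a)]
    pool = permissible if permissible else actions  # fall back if the filter empties the set
    return max(pool, key=lambda a: a.utility)

if __name__ == "__main__":
    options = [
        Action("free_ride", utility=1.0, harm_to_others=0.8),
        Action("defect",    utility=0.9, harm_to_others=0.5),
        Action("cooperate", utility=0.7, harm_to_others=0.0),
    ]
    print(choose(options).name)  # -> "cooperate"
```

Here the purely self-interested option (free riding) is filtered out by the moral constraint, so the agent maximizes utility only over the remaining, morally permissible alternatives.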

Game theory has been adopted to understand the function of morality because it helps to construct thought experiments about the diverse conducts of agents. But it may also be used to explain, predict, and evaluate agent behaviour in contexts where the outcome of an action depends on what several agents choose to do (in a dynamic manner), and where each agent's choices depend on what the others choose to do (Costa & Dimuro, 2007, 2009).

Over repeated interactions, a stable equilibrium of exchanges may be reached through morality, where the behaviour of agents is determined by moral and/or social rules (e.g., “do to others as you want to be done by”). So, agents may be provided with an internal commitment capability, some sort of conscience (linked to motives and dispositions): the realization that it is unjust to take a free ride, or that it would be unfair not to do the right thing (Franco, 2008; Adamatti et al., 2009). A small sketch of such a reciprocity rule in a repeated exchange follows below.
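As a minimal sketch of such repeated exchanges, the Python fragment below (assumed prisoner's-dilemma payoffs and a simple reciprocity rule, not taken from the chapter) shows agents that mirror each other's last move settling into stable mutual cooperation, while purely self-interested agents lock into mutual defection:

```python
# Minimal sketch of repeated exchanges governed by a reciprocity rule.
# Payoff numbers follow the usual prisoner's dilemma ordering and are
# illustrative assumptions, not values from the chapter.

PAYOFF = {               # (my_move, other_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def reciprocal(history_other: list[str]) -> str:
    """'Do to others as you want to be done by': cooperate first,
    then mirror the partner's previous move."""
    return "C" if not history_other else history_other[-1]

def selfish(history_other: list[str]) -> str:
    """Pure self-interest: always defect."""
    return "D"

def play(strategy_a, strategy_b, rounds: int = 10) -> tuple[int, int]:
    """Repeated exchange: each agent chooses based on the other's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(reciprocal, reciprocal))  # stable mutual cooperation: (30, 30)
    print(play(selfish, selfish))        # mutual defection lock-in:  (10, 10)
```

The reciprocity rule plays the part of the internalized moral/social rule mentioned above: by committing to return cooperation with cooperation, the agents reach an equilibrium of exchanges that pure self-interest alone would not sustain.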
