Bridging Two Realms of Machine Ethics

Luís Moniz Pereira (Universidade Nova de Lisboa, Portugal) and Ari Saptawijaya (Universidade Nova de Lisboa, Portugal & Universitas Indonesia, Indonesia)
Copyright: © 2015 |Pages: 28
DOI: 10.4018/978-1-4666-8592-5.ch010

Abstract

We address problems in machine ethics dealt with using computational techniques. Our research has focused on Computational Logic, particularly Logic Programming, and its suitability for modeling morality in the realm of the individual, namely moral permissibility, its justification, and the dual-process nature of moral judgments. In the collective realm, we have studied the emergence of norms and morality computationally, using Evolutionary Game Theory in populations of individuals. These individuals, to start with, are not equipped with much cognitive capability, and simply act from a predetermined set of actions. Our research shows that introducing cognitive capabilities, such as intention recognition, commitment, and apology, separately and jointly, reinforces the emergence of cooperation in populations, compared to their absence. Bridging such capabilities between the two realms helps us understand the emergent ethical behavior of agents in groups, and to implement it not just in simulations, but in the world of future robots and their swarms. Evolutionary Anthropology provides teachings.
Chapter Preview

Introduction

Machine ethics (also known as computational morality, machine morality, artificial morality, and computational ethics) is a burgeoning field of enquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. It has particularly attracted interest from the artificial intelligence community and has brought together perspectives from various fields, amongst them philosophy, cognitive science, neuroscience, and primatology. The overall result of this interdisciplinary research is therefore important not only for equipping agents with the capacity to make moral judgments, but also for helping us better understand morality, through the creation and testing of computational models of ethical theories.

Research in artificial intelligence particularly contributes by showing how techniques from computational logic, machine learning, and multi-agent systems can be employed to computationally model, to some extent, moral decision-making. In the present chapter we survey problems in machine ethics that have been examined, and the techniques used in dealing with such problems. Various techniques have been exploited, including machine learning, e.g., case-based reasoning and artificial neural networks, and logic-based formalisms, e.g., deontic logic and non-monotonic logics. Our research, in particular, has focused on logic programming techniques and their appropriateness for modeling several aspects of morality, namely moral permissibility, its justification, and the dual-process of moral judgments. We argue that the main characteristics of these aspects can be captured by the available ingredients and formalisms based on logic programming. These include, among others, abduction (with integrity constraints), updating, preferences, argumentation, and counterfactuals. These ingredients are framed together in an agent life cycle architecture, which allows an agent to make a moral decision by means of abduction (either reactively or deliberatively—the dual-process), respecting its integrity constraints in order to rule out a priori impermissible actions, weighing and preferring decisions after inspecting their consequences, providing arguments to justify the moral decisions made, and updating itself either with the changes due to its decisions or with other ethical principles being told or learned. We also touch upon uncertainty and counterfactual reasoning in moral decision-making, and how they fit in our logic programming based agent architecture.
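The interplay of abducibles, integrity constraints, and a posteriori preferences described above can be illustrated with a minimal sketch. The Python below is a hypothetical toy encoding of a trolley-style scenario, not the authors' logic programming system: candidate actions play the role of abducibles, a constraint rules out actions that use harm as a means (a rough rendering of the doctrine of double effect), and a preference then selects among the permissible actions by inspecting their consequences. The scenario names, payoff values, and helper functions are all illustrative assumptions.

```python
# Hypothetical trolley-style scenario: candidate actions (the "abducibles")
# mapped to the consequences each would bring about.
CONSEQUENCES = {
    "do_nothing":     {"deaths": 5, "harm_used_as_means": False},
    "divert_trolley": {"deaths": 1, "harm_used_as_means": False},
    "push_bystander": {"deaths": 1, "harm_used_as_means": True},
}

def satisfies_integrity_constraints(action):
    """A priori impermissibility: rule out any action that uses harm
    as a means (a toy rendering of the doctrine of double effect)."""
    return not CONSEQUENCES[action]["harm_used_as_means"]

def permissible_actions():
    """Abductive step: keep only candidates passing the constraints."""
    return [a for a in CONSEQUENCES if satisfies_integrity_constraints(a)]

def preferred_action():
    """A posteriori preference: among permissible actions, inspect the
    consequences and minimize the number of deaths."""
    return min(permissible_actions(), key=lambda a: CONSEQUENCES[a]["deaths"])

print(permissible_actions())  # ['do_nothing', 'divert_trolley']
print(preferred_action())     # divert_trolley
```

Note the two-stage structure: constraints prune candidates before consequences are weighed, mirroring the separation between ruling out a priori impermissible actions and preferring among the remainder.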

The agent life cycle architecture concerns itself only with the realm of the individual, where computation is the vehicle for modeling the dynamics of an agent's knowledge and moral cognition. In the collective realm, the emergence of norms and morals has been studied computationally, using the techniques of Evolutionary Game Theory, in populations of rather simple-minded agents. That is, these agents are not equipped with any cognitive capability, and thus simply act from a predetermined set of actions. Our research has shown that the introduction of cognitive capabilities, such as intention recognition, commitment, and apology, separately and jointly, reinforces the emergence of cooperation in the population, compared to the absence of such cognitive abilities. We discuss how modeling moral cognition in individuals (using the aforementioned ingredients of logic programming) within a networked population should allow them to fine-tune game strategies, which in turn may lead to the evolution of high levels of cooperation. Moreover, modeling such capabilities in individuals within a population may help us understand the emergent behavior of ethical agents in groups, in order to implement them not just in simulations, but also in the real world of future robots and their swarms.
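The effect described above—that a mechanism such as apology can tip a population from defection toward cooperation—can be sketched with deterministic replicator dynamics on a prisoner's dilemma. In this hypothetical Python illustration, the apology mechanism is modeled, as a gross simplification, by a compensation transfer from defector to exploited cooperator; the actual studies cited use richer strategy spaces and stochastic evolutionary dynamics. The payoff values and the compensation parameter are illustrative assumptions.

```python
def replicator_step(x, payoff):
    """One discrete replicator step: a strategy's share grows in
    proportion to its payoff relative to the population average."""
    R, S, T, P = payoff
    f_c = R * x + S * (1 - x)        # expected payoff of a cooperator
    f_d = T * x + P * (1 - x)        # expected payoff of a defector
    f_bar = x * f_c + (1 - x) * f_d  # population average payoff
    return x * f_c / f_bar

def evolve(payoff, x0=0.5, steps=200):
    """Iterate the dynamics from an initial cooperator fraction x0."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

PD = (3, 0, 5, 1)  # classic prisoner's dilemma payoffs: R, S, T, P

# Hypothetical "apology" mechanism: after defecting against a cooperator,
# the defector transfers a compensation c, shifting the off-diagonal payoffs.
c = 3
PD_APOLOGY = (3, 0 + c, 5 - c, 1)

print(evolve(PD))          # near 0: defection takes over the population
print(evolve(PD_APOLOGY))  # near 1: cooperation persists and spreads
```

The qualitative point matches the text: without the mechanism, defection strictly dominates and cooperators vanish; with a sufficiently costly apology, cooperation becomes evolutionarily favored even though the agents themselves remain simple-minded.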

This chapter hence contemplates two distinct realms of machine ethics, to wit, the individual and the collective, and identifies the bridges needed to connect them. In studies of human morality, these distinct but interconnected realms are evinced too: one stressing above all individual cognition, deliberation, and behavior; the other stressing collective morals, and how they emerged. Of course, the two realms are necessarily intertwined, for cognizant individuals form populations, and the twain evolved jointly to cohere into collective norms and into individual interaction.

Presently, machine ethics is becoming an ever more pressing concern, as machines become ever more sophisticated, autonomous, and act in groups, among populations of other machines and of humans. Ethics and jurisprudence, and hence legislation, are however lagging much behind in adumbrating the new ethical issues arising from these circumstances.
