The Evolution of Computational Agency

Srinath Srinivasa (International Institute of Information Technology, Bangalore, India) and Jayati Deshmukh (International Institute of Information Technology, Bangalore, India)
Copyright: © 2020 | Pages: 19
DOI: 10.4018/978-1-7998-2975-1.ch001


Agent-based models have emerged as a promising paradigm for addressing the ever-increasing complexity of information systems. In their initial days in the 1990s, when object-oriented modeling was at its peak, an agent was treated as a special kind of “object” that had a persistent state and its own thread of execution. Since then, agent-based models have diversified enormously, even opening new conceptual insights about the nature of systems in general. This chapter presents a perspective on the disparate ways in which our understanding of agency, as well as computational models of agency, have evolved. Advances in hardware such as GPUs, which brought neural networks back to life, may similarly infuse new life into agent-based models and pave the way for significant advances in research on artificial general intelligence (AGI).

1 Introduction

Today’s information systems are complex and distributed, and need to scale to millions of users and a variety of devices with guaranteed uptimes. As a result, top-down approaches to systems design and engineering are becoming increasingly infeasible.

Starting sometime in the 1990s, a branch of systems engineering has approached the problem of systemic complexity in a bottom-up fashion, by designing “autonomous” or “intelligent” agents that can proactively act and decide on their own to address specific, local issues pertaining to their immediate requirements. These agents can also communicate and coordinate with one another to jointly solve larger problems. The autonomous nature of agents requires some form of rationale that justifies their actions. Given that object-oriented modeling had attracted mainstream attention at the time, the distinction between mechanistic “objects” and autonomous “agents” was often summarized with the slogan (Jennings et al., 1998): objects do it for free, agents do it for money.

Early research in agent-based systems focused on designing architectures, communication primitives, and knowledge structures for agents’ reasoning. Several such independent research pursuits also resulted in the emergence of standards organizations like FIPA, which is now an IEEE standards committee promoting agent-based modeling and the interoperability of its standards with other technologies (Poslad, 2007).

But research interest soon moved from communication and coordination to the concept of agency itself. Agents are meant to take decisions “autonomously”, and the term “autonomy” needed sound conceptual and computational foundations. An autonomous agent needs to operate “on its own”, and definitions of what this entails distinguish different models of autonomy. Broadly, approaches to the computational modeling of autonomy fall into the following research areas: normative, adaptive, quantitative, and autonomic models of agency.

Normative models of agency interpret agency as a combination of imperatives and discretionary entitlements. They implement logical frameworks that encode different forms of individual and collective goals (Castelfranchi et al., 1999; Van der Hoek & Wooldridge, 2003; López et al., 2006). Normative elements for agents include encodings of their goals, which in turn lead to encodings of their intentions or deliberative plans for achieving those goals, as well as their beliefs about the environment, their obligations, their prohibitions, and so on. Interacting pairs of normative agents create contracts that regulate their independent actions with respect to each other’s actions. Systems of multiple normative agents adopt collective deontics or constitutions that regulate overall behaviour (Andrighetto et al., 2013).
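The normative elements above can be illustrated with a minimal sketch. All names here (the class, its fields, and the example actions) are hypothetical illustrations, not constructs from the chapter or from any cited framework: the agent simply filters candidate actions through its prohibitions and gives obligations precedence over its own discretionary goals.

```python
# Hypothetical sketch of a normative agent: goals, obligations, and
# prohibitions filter and order the actions the agent will take.

class NormativeAgent:
    def __init__(self, goals, obligations, prohibitions):
        self.goals = goals                # actions the agent wants to perform
        self.obligations = obligations    # actions the agent is required to perform
        self.prohibitions = prohibitions  # actions the agent must not perform

    def deliberate(self, candidate_actions):
        """Return permitted actions: obligations first, then own goals."""
        permitted = [a for a in candidate_actions if a not in self.prohibitions]
        obliged = [a for a in permitted if a in self.obligations]
        desired = [a for a in permitted if a in self.goals and a not in obliged]
        return obliged + desired

agent = NormativeAgent(goals={"trade"}, obligations={"pay_tax"},
                       prohibitions={"steal"})
plan = agent.deliberate(["steal", "trade", "pay_tax", "idle"])
# the prohibited action is dropped; the obligation precedes the goal
```

A contract between two such agents could then be modeled as each agent adding entries to the other’s obligation and prohibition sets.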

Adaptive frameworks for modeling agency have emerged from problems where agents must interact with complex and dynamic environments, as in autonomous driving and robotic navigation. These frameworks can be either model-driven, where an underlying model of the environment is learned through interactions, or model-agnostic, where adaptations happen purely from positive or negative reinforcement signals from the environment (Macal & North, 2005; Shoham et al., 2003).
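A model-agnostic adaptation can be sketched as follows, assuming a toy two-action environment of my own invention (not from the chapter): the agent never learns a model of the environment, it only updates per-action value estimates from reward signals, in the style of a simple reinforcement-learning value update.

```python
import random

# Model-agnostic adaptation sketch: action values are learned purely
# from reinforcement signals, with no model of the environment.
random.seed(0)

ACTIONS = ["left", "right"]
q = {a: 0.0 for a in ACTIONS}    # learned value estimate per action
alpha, epsilon = 0.1, 0.1        # learning rate, exploration rate

def reward(action):
    # hidden environment: "right" pays better (the agent never sees this)
    return 1.0 if action == "right" else 0.2

for _ in range(500):
    if random.random() < epsilon:        # occasionally explore
        a = random.choice(ACTIONS)
    else:                                # otherwise exploit the best estimate
        a = max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])   # reinforcement update

# after training, the agent's estimates favour the higher-payoff action
```

A model-driven variant would instead use the same interactions to fit an explicit model of the reward function and plan against it.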

The third paradigm of agency is quantitative, grounded in decision theory and rational choice theory (Ferber & Weiss, 1999; Parsons & Wooldridge, 2002; Semsar-Kazerooni & Khorasani, 2009). These models endow each agent with a self-interest function; agents interact with their environment to obtain different kinds of payoffs, resulting in a corresponding utility, and rational agents strive to make decisions that maximize this utility. Rational choice is represented either as pair-wise preference functions between choices or as numerical payoffs. Interactions between agents are modeled as games representing confounded rationality, where the rational choices of one agent may (positively or adversely) affect the prospects of others.
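Utility maximization over numerical payoffs can be made concrete with a small sketch. The game and payoff values below (a standard Prisoner's Dilemma) are an illustrative assumption, not an example taken from the chapter.

```python
# Rational choice as utility maximization over a payoff matrix.
# payoffs[my_action][other_action] = my utility (Prisoner's Dilemma)
payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(other_action):
    """A rational agent picks the action maximizing its own payoff."""
    return max(payoffs, key=lambda a: payoffs[a][other_action])

# "defect" is the best response to either choice, even though mutual
# cooperation would pay both agents more: one agent's rational choice
# worsens the other's prospects, an instance of confounded rationality.
```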
