A Process-Oriented Framework for Regulating Artificial Intelligence Systems

Andrew Stranieri, Zhaohao Sun
DOI: 10.4018/978-1-7998-9016-4.ch005

Abstract

Frameworks for the regulation of artificial intelligence (AI) systems are emerging; some are based on regulation theories, while others are more technologically focused. Regulation of AI systems is likely to emerge in an ad-hoc, unstructured, and uncoordinated fashion that renders high-level frameworks philosophically interesting but of limited practical benefit. In this chapter, the task of arriving at a collection of interventions that regulate an AI system is taken to be a process-oriented problem. The chapter presents a process-oriented framework for the design of regulatory systems by deliberating groups. It also discusses responsibility, mechanisms, and institutions as key elements in the regulation of AI systems. The proposed approach might facilitate research and development of responsible AI, explainable AI, and ethical AI for an ethical and inclusive digitized society. It also has implications for the development of e-business, e-services, and e-society.

Introduction

AI has developed rapidly in the past decade, and the AI revolution has been playing an important role in many sectors of the economy (Scherer, 2016; Sun & Stranieri, 2021). Although ethical artificial intelligence (AI), responsible AI, and explainable AI have been studied for decades, they have drawn significant attention recently with the dramatic development of AI in driverless cars, cloud computing, e-business, e-services, and e-society (Bossmann, 2016; Bostrom & Yudkowsky, 2018). For instance, high-profile AI applications in driverless cars have raised concerns about who takes responsibility for misdemeanors or errors made by vehicle autopilots trained with machine learning algorithms (Raviteja, 2020). Software developers who design machine learning systems cannot reasonably be held responsible for poor decisions made by systems that have learned from long-term exposure to traffic environments (Dixit et al., 2016).

Concerns regarding the assignment of blame when electronic health record-based systems with embedded AI cause harm have led to calls for these systems to explain their reasoning (Payrovnaziri et al., 2020). However, what constitutes a sufficient explanation is difficult to specify, particularly with AI systems that perform actions based on learning conducted over exposure to large datasets (Atkinson et al., 2020; Doshi-Velez et al., 2017). Further, machine learning algorithms can be expected to learn in increasingly sophisticated ways with access to increasingly large datasets (Sun et al., 2018), so challenges involved in assigning responsibility for the actions taken by AI systems can be expected to become increasingly complex and pressing.
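
To make the difficulty concrete, the short Python sketch below (our illustration, not a method from this chapter) trains a black-box classifier and prints a post-hoc feature-importance ranking; the dataset, model, and importance measure are all assumptions, and the resulting "explanation" is a global summary of the model rather than a reason for any individual decision, which is precisely the gap the explainability literature grapples with.

```python
# Minimal sketch (assumed example, not the chapter's method): a black-box
# model whose only "explanation" is a post-hoc feature-importance ranking.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Impurity-based importances: a global summary of the model, not a
# reason for any single prediction it makes.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:.3f}")
```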

The capacity for AI systems to explain their reasoning for the assignment of responsibility and blame is one aspect of the broader objective of regulating AI systems. Wirtz et al. (2020) survey positions on AI regulation based on the assumption that regulation of AI systems involves more than explainable AI and requires the introduction of frameworks that include legal and other constraints. Thierer et al. (2017) take a laissez-faire public policy stance that encourages AI development to remain largely unregulated until or unless obvious harm occurs. Scherer (2015) recommends the creation of an agency commissioned to enforce compliance with legislation specifically focused on regulating AI systems.

(Gasser & Almeida, 2017) presents a three layered model for the governance of AI systems that includes 1) a technology layer with algorithm accountability, 2) standards and data governance and 3) a socio-ethical layer that specifies norms and legislation. (Rahwan, 2018) advocated that two groups of stakeholders immediately involved with an AI technology act as a human in the loop system to understand its workings and regulate the design and implementation. The human in the loop system is further complemented by a society in the loop group comprised of stakeholders with competing interests. This group represents society at large and examines trade-offs and unintended consequences. The weBuildAI framework advanced by (Lee et al., 2019) recommends an algorithm design process where individuals representing diverse interests engage in participatory co-design processes with AI systems developers.
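
As a rough illustration of how the layered view could be operationalised, the Python sketch below treats a regulatory design as a checklist of interventions per layer; the layer names follow Gasser and Almeida (2017), but the class, field, and intervention names are our assumptions, not anything prescribed by those authors.

```python
# Illustrative sketch only: Gasser and Almeida's three governance layers
# modelled as a checklist of interventions. All class and field names are
# our assumptions, not an API from the chapter.
from dataclasses import dataclass, field

LAYERS = ("technology", "standards_and_data", "socio_ethical")

@dataclass
class RegulatoryDesign:
    interventions: dict = field(default_factory=lambda: {l: [] for l in LAYERS})

    def add(self, layer: str, intervention: str) -> None:
        self.interventions[layer].append(intervention)

    def uncovered_layers(self) -> list:
        # A design is incomplete while any layer has no intervention.
        return [l for l, items in self.interventions.items() if not items]

design = RegulatoryDesign()
design.add("technology", "algorithm accountability audit")
design.add("standards_and_data", "data governance standard")
print(design.uncovered_layers())  # ['socio_ethical'] -> still incomplete
```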

Common themes inherent in the approaches to regulating AI systems advanced by Gasser and Almeida (2017), Rahwan (2018), and Lee et al. (2019) include the notion that the exercise is very difficult and that participants who represent society should be closely involved in the regulation. But what stakeholders or participants should actually do, and how they should reach decisions aimed at regulating an AI system, is largely unspecified by the framework authors. We contend that participating in regulation discussions is particularly difficult because the problem of regulating an AI system conforms to the ten criteria of “wicked problems” (Rittel & Webber, 1974). The position advanced in this chapter is that a stakeholder group can benefit from a clearly defined process to guide its reasoning. A process that extends the one described by Stranieri and Sun (2021) is presented here; a hedged sketch of what such a process scaffold might look like in code follows.
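
The sketch below is a minimal, hypothetical rendering of such a process scaffold in Python; the step names, the group composition, and the bounded-rounds stopping rule are all our assumptions for illustration, since wicked problems have no natural stopping rule and the chapter's actual process is only previewed here.

```python
# Hypothetical scaffold for a deliberating group working through process
# steps; step names and the revision loop are assumptions for illustration.
PROCESS_STEPS = [
    "identify stakeholders and interests",
    "describe the AI system and its context",
    "propose candidate interventions",
    "assess trade-offs and unintended consequences",
    "record the agreed interventions and review triggers",
]

def deliberate(step: str, group: list) -> dict:
    # Placeholder: in practice this is a facilitated group discussion,
    # not a computation; we only record who deliberated on what.
    return {"step": step, "participants": list(group), "resolved": False}

def run_process(group: list, max_rounds: int = 3) -> list:
    log = []
    for round_no in range(1, max_rounds + 1):
        for step in PROCESS_STEPS:
            outcome = deliberate(step, group)
            outcome["round"] = round_no
            log.append(outcome)
        # Wicked problems lack a definitive stopping rule; groups revisit
        # steps until resources or deadlines force a provisional outcome.
    return log

if __name__ == "__main__":
    records = run_process(["regulator", "developer", "consumer advocate"])
    print(len(records), "deliberation records produced")
```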

A process for scaffolding the design of a regulatory system to govern an AI technology is intended to be used by an individual or group charged with defining interventions that, in their totality, will regulate the AI system. Each step in the process can be regarded as a problem that satisfies the conditions for a wicked problem, so regulatory solutions can be expected to be found and refined over time by deliberating groups rather than by advancing pre-specified frameworks or algorithms. Based on the above discussion, the research issues are below:

Key Terms in this Chapter

Machine Learning: Is concerned with how computers can adapt to new circumstances and detect and extrapolate patterns and knowledge.

Stare Decisis: The requirement that a decision-maker in a court make decisions consistent with his or her own past decisions, the decisions of courts at the same level, and those of higher courts.

Process-Oriented Paradigm: A set of ideas and actions intended to deal with a problem by developing a model consisting of process steps, procedures, and tasks.

Artificial Intelligence (AI): Is the science and technology concerned with imitating, extending, augmenting, and automating the intelligent behaviors of human beings.

Regulating AI Systems: Is about making AI systems safe, accountable, liable, and responsible, in keeping with a shared global responsibility for humanity’s sustainable development.

Ethical Artificial Intelligence: Is a branch of the ethics of technology specific to artificial intelligence and related artificially intelligent systems. It concerns the moral behaviors of humans as they design, make, use, and treat artificial intelligence and related artificially intelligent systems, as well as the moral behaviors of AI-based machines, that is, AI machine ethics.

Data Mining: Is a process of discovering models, summaries, derived values, and knowledge from a large database. Another definition is that it is the process of using statistical, mathematical, logical, and AI methods and tools to extract useful information from large databases.

Intelligent System: Is a system that can imitate, automate, and augment some intelligent behaviors of human beings. Expert systems and knowledge-based systems are examples of intelligent systems.
