Only Can AI Understand Me?: Big Data Analytics, Decision Making, and Reasoning

Andrew Stranieri, Zhaohao Sun
Copyright: © 2021 |Pages: 21
DOI: 10.4018/978-1-7998-4963-6.ch003

Abstract

This chapter addresses whether AI can understand me. It presents a framework for regulating AI systems that draws on Strawson's moral philosophy and on concepts from jurisprudence and theories of regulation. The chapter proposes that, as AI algorithms draw inferences from repeated exposure to big datasets, they become increasingly sophisticated and rival human reasoning. Regulating them requires that AI systems be granted agency and be subject to the rulings of courts, with humans sponsoring the AI systems for registration with regulatory agencies. This enables judges to make decisions about moral culpability by taking the AI system's explanation into account along with the full social context of the misdemeanor. The proposed approach might facilitate research and development in intelligent analytics, intelligent big data analytics, multiagent systems, artificial intelligence, and data science.

Introduction

For decades since its inception in the 1950s, the field of artificial intelligence has included active research into models and algorithms in which real-world knowledge is represented explicitly. This is known as symbolic reasoning and contrasts with models and algorithms in which knowledge is learnt indirectly through repeated exposure to data or real-world environments, known as sub-symbolic reasoning (Stranieri & Zeleznikow, 2006). Although hybrid models, including abductive case-based reasoning (Sun, Finnie, & Weber, 2005a) and argumentation-based neural networks (Stranieri, Zeleznikow, Gawler, & Lewis, 1999), have proved useful, machine learning based sub-symbolic reasoning has risen to prominence in recent years and disrupted AI practice (Kaplan, 2016).
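
To make the contrast concrete, the following minimal Python sketch (our illustration, not drawn from the chapter) places a hand-authored symbolic braking rule beside a tiny perceptron that learns comparable behaviour from data. The feature names, thresholds, and training samples are hypothetical; the point is that the sub-symbolic version ends up encoding its knowledge in numeric weights rather than legible rules.

```python
# Symbolic reasoning: domain knowledge is written down explicitly as a rule,
# so the inference itself doubles as an explanation. (Hypothetical rule.)
def symbolic_decision(speed_kmh: float, distance_m: float) -> str:
    if distance_m < 10 and speed_kmh > 30:
        return "brake"          # explicit, human-authored condition
    return "continue"

# Sub-symbolic reasoning: similar behaviour is learnt indirectly from examples;
# the resulting "knowledge" lives in the weights w and bias b.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Hypothetical training data: (speed, distance) pairs, 1 = brake, 0 = continue.
samples = [(50, 5), (40, 8), (20, 50), (10, 80)]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)

print(symbolic_decision(50, 5))                      # rule fires: "brake"
print(1 if w[0] * 50 + w[1] * 5 + b > 0 else 0)      # learnt weights agree: 1
```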

High profile applications in driverless cars (Raviteja, 2020) have raised concerns about who takes responsibility for misdemeanors or errors that vehicle autopilots, trained with machine learning, may make. The degree of machine learning in many driverless cars is so high that the software developers who designed the learning system cannot reasonably be held responsible for poor decisions made by systems that have learnt from long-term exposure to traffic environments (Dixit, Chand, & Nair, 2016). Drivers who guide the autopilot's learning similarly ought not to be blamed for relying on the automated driving.

Concerns regarding decision-making responsibility in automated systems have led to calls, surveyed by Payrovnaziri et al. (2020), for AI systems to be able to explain their reasoning. The concept of explanation is far from straightforward, particularly when AI systems attempt to make decisions in law (Atkinson, Bench-Capon, & Bollegala, 2020). Doshi-Velez et al. (2017) highlight that it is not sufficient for an AI and law program to arrive at the correct outcome; it must also explain its reasoning properly. Logic-based AI and law systems do this by reproducing a trace of the logical statements that were proven during inference chains. Case-based reasoning is a kind of experience-based reasoning (Sun & Finnie, 2007). Its principle is that similar problems have similar solutions. In case-based reasoning, a case is a representation of an experience, consisting of a problem encountered in the past and its corresponding solution; equality and equivalence are special cases of the similarity relation used. Case-based reasoning systems explain their reasoning by referencing the ways in which a case is similar to some past cases and different from others (Ashley, 1991; Sun, Han, & Dong, 2008); a minimal sketch of this retrieve-and-explain pattern follows the definition below. Other AI and law programs generate an explanation for an inferred outcome by drawing on knowledge organized in argument schemas. Barredo Arrieta et al. (2020) survey recent research on explainable artificial intelligence and offer a definition:

Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.
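
The case-based explanation pattern described above can be sketched in a few lines of Python. The following is our own minimal illustration, not code from any of the systems cited: the Case structure, the feature-overlap similarity measure, and the traffic-incident case base are all hypothetical, chosen only to show how retrieving the most similar past case also yields an explanation that references that case.

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    features: dict   # problem description, e.g. {"surface": "wet", "speed": "high"}
    solution: str    # the outcome decided in that past case

def similarity(problem: dict, case_features: dict) -> float:
    """Hypothetical similarity: fraction of the problem's features matched by the case."""
    shared = [k for k in problem if k in case_features and problem[k] == case_features[k]]
    return len(shared) / max(len(problem), 1)

def solve(problem: dict, case_base: list):
    """Retrieve the most similar past case and explain the answer by citing it."""
    best = max(case_base, key=lambda c: similarity(problem, c.features))
    explanation = (
        f"Adopted the solution of case '{best.name}' because it matches "
        f"{similarity(problem, best.features):.0%} of the problem's features."
    )
    return best.solution, explanation

# Hypothetical case base for a traffic-incident domain.
case_base = [
    Case("wet-road-collision", {"surface": "wet", "speed": "high"}, "driver liable"),
    Case("mechanical-failure", {"surface": "dry", "fault": "brakes"}, "manufacturer liable"),
]

solution, why = solve({"surface": "wet", "speed": "high", "time": "night"}, case_base)
print(solution)  # -> driver liable
print(why)       # explanation cites the retrieved case and the degree of similarity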

In this chapter, we follow Doshi-Velez et al. (2017) and assume that AI systems will increasingly be required to explain their reasoning and that increasingly sophisticated ways of generating explanations will be discovered (Turek, 2020). Second, we assume that the machine learning algorithms underpinning AI systems will continue to learn ever more sophisticated ways to make decisions or perform actions, particularly as they gain access to big data (Sun, Strang, & Li, 2018). Our third assumption is that AI agents ought to make morally good decisions and take morally good actions (Turek, 2020). Approaches therefore need to be developed to assess the extent to which an AI system has performed, or will perform, appropriate actions for appropriate reasons.

In this chapter, we advance the proposition that a framework for regulating AI systems must involve AI systems that can offer explanations for their actions, but that this alone is not enough. A regulatory framework must also include processes in which humans, operating through social institutions, intervene in judging the moral culpability of AI systems. The framework presented in this chapter integrates explainable AI with concepts from jurisprudence and addresses the following questions:

  • How can feedback from the AI program (system) be used to discover errors the AI system has made?

  • What are the key elements of a framework that can be used to assess the extent to which an AI system may have erred?

  • How can decisions be made regarding the appropriate actions to take once an AI system has erred?

Key Terms in this Chapter

Artificial Intelligence (AI): Is the science and technology concerned with imitating, extending, augmenting, and automating the intelligent behaviors of human beings.

Intelligent System: Is a system that can imitate, automate, and augment some intelligent behaviors of human beings. Expert systems and knowledge-based systems are examples of intelligent systems.

Machine Learning: Is concerned with how computers can adapt to new circumstances and detect and extrapolate patterns and knowledge.

Data Mining: Is a process of discovering models, summaries, derived values, and knowledge from a large database. Another definition is that it is the process of using statistical, mathematical, logical, and AI methods and tools to extract useful information from large databases.

Intelligent Big Data Analytics: Is the science and technology of collecting, organizing, and analyzing big data to discover patterns, knowledge, and intelligence, as well as other information within the big data, based on artificial intelligence and intelligent systems.

Big Data: Is data with at least one of the ten big characteristics: big volume, big velocity, big variety, big veracity, big intelligence, big analytics, big infrastructure, big service, big value, and big market.

Stare Decisis: The requirement that a decision maker in a court make decisions consistent with his or her own past decisions, the decisions of courts at the same level, and those of higher courts.

Case-Based Reasoning: Is a kind of experience-based reasoning, based on the principle that similar problems have similar solutions. A case is a previously experienced problem together with its solution. Abductive case-based reasoning is a more practical, explanation-oriented reasoning paradigm.
