Ethical Behavior and Legal Regulations in Artificial Intelligence (Part Two): Representation of Law and Ethics in Intelligent Systems

Mandy Goram, Dirk Veiel
DOI: 10.4018/978-1-7998-4894-3.ch003

Abstract

Intelligent systems and assistants should help users to complete tasks and support them at work, on the road, and at home. At the same time, these systems are becoming increasingly sophisticated and autonomous in their decisions and are already taking over simple tasks from us today. So that users do not lose control over their own data, and to avoid the risk of user manipulation, these systems must comply with ethical and legal guidelines. In this chapter, the authors describe a novel generic approach and its realization for the development of intelligent systems that allow flexible modeling of ethical and legal aspects.

Introduction

Intelligent systems and assistants should help users to complete tasks and support them at work, on the road, and at home. At the same time, these systems are becoming increasingly sophisticated and autonomous in their decisions, and are already taking over simple tasks from us today. So that users do not lose control over their own data, and to avoid the risk of user manipulation, these systems must comply with ethical and legal guidelines. Self-sovereignty and privacy should be preserved. We have already explained what this means in Ethical Behavior and Legal Regulations in Artificial Intelligence (Part One): Supporting Sovereignty of Users while Using Complex and Intelligent Systems.

In this chapter, we describe a novel generic approach and its realization for the development of intelligent systems that allow flexible modeling of ethical and legal aspects. These aspects can subsequently be integrated into the intelligent core system, and when ongoing development requires changes, the integrated aspects can be adapted accordingly. On the one hand, the approach enables stakeholders to develop and provide intelligent systems; on the other hand, it aims to support users’ sovereignty.

In contrast to the approaches that we described in Part 1, we research and develop a generic system approach for intelligent systems that provides transparency and explainability by default. The generic approach offers basic functionalities that are needed across domains and that allow stakeholders to specify ethical and legal rules. This enables the related stakeholders to realize intelligent systems for their specific domains that take ethical and legal aspects into account. Stakeholders can integrate required extensions without changing the generic core system and can reuse best practices from different domains, for instance collaboration policies, legal regulations, and device support.

We realize the generic system approach by implementing an extendable context-based adaptive system environment (eCBASE) for the legally compliant development and deployment of domain-specific context-based adaptive applications. For this, we use the context-based adaptive collaboration system CONTact and its existing adaptation runtime environment (ARE) to develop eCBASE and to support the stakeholders involved in the development process. To do so, we extended the existing domain model and the functionalities of CONTact that we introduced in Part 1. Stakeholders who provide intelligent systems can specify how eCBASE should support users in specific situations. When users have to make decisions in such situations, they need to be aware of the related consequences. Therefore, eCBASE has to offer them personalized and situation-specific explanations.
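To make this idea more concrete, the following minimal Python sketch shows how a domain-specific ethical or legal policy might be registered with a generic core system and then used to generate a situation-specific explanation. The names GenericCore, Policy, and Situation, as well as the example consent policy, are illustrative assumptions and do not reflect the actual eCBASE or CONTact implementation.

```python
# Hypothetical sketch (not the actual eCBASE/CONTact API): a generic core
# that accepts domain-specific ethical/legal policies without being changed
# itself and produces situation-specific explanations.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Situation:
    """Simplified snapshot of the user's current context."""
    user: str
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class Policy:
    """A pluggable ethical or legal rule with a human-readable explanation."""
    name: str
    applies_to: Callable[[Situation], bool]
    explanation: str


class GenericCore:
    """Generic core system: policies are integrated afterwards, not hard-coded."""

    def __init__(self) -> None:
        self._policies: List[Policy] = []

    def register_policy(self, policy: Policy) -> None:
        """Domain-specific extensions plug in their rules here."""
        self._policies.append(policy)

    def explain(self, situation: Situation) -> List[str]:
        """Return personalized, situation-specific explanations for the user."""
        return [f"{p.name}: {p.explanation}"
                for p in self._policies if p.applies_to(situation)]


if __name__ == "__main__":
    core = GenericCore()
    core.register_policy(Policy(
        name="Consent required",
        applies_to=lambda s: s.attributes.get("shares_personal_data") == "yes",
        explanation="Your personal data will only be processed after you give "
                    "consent, and you can withdraw that consent at any time.",
    ))
    situation = Situation(user="alice",
                          attributes={"shares_personal_data": "yes"})
    for line in core.explain(situation):
        print(line)
```

In such a design, the core system never hard-codes domain rules; extensions only register policies, which mirrors the idea of integrating ethical and legal aspects without changing the generic core.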

Key Terms in this Chapter

(Self-)Sovereignty: Users are the owners of their data and can control how their data are processed. They freely give consent as to which data are used and for what purpose. They can withdraw consent for particular functionalities or purposes at any time.

Context: Context includes all information that can be used to describe a situation. This includes socio-technical, physical, and spatio-temporal information as well as the legal constraints that apply in the specific situation. This information provides the necessary framework for action and for the implementation of appropriate policies and interventions.
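For illustration, such a context could be captured as a simple record; the following sketch uses hypothetical field names and is not the chapter's formal context model.

```python
# Illustrative sketch only: the field names below are assumptions, not the
# chapter's actual context model.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class Context:
    """All information that can be used to describe a situation."""
    # socio-technical information, e.g. roles, devices, running applications
    socio_technical: Dict[str, str] = field(default_factory=dict)
    # physical information, e.g. location or noise level
    physical: Dict[str, str] = field(default_factory=dict)
    # spatio-temporal information
    timestamp: datetime = field(default_factory=datetime.now)
    # legal constraints that apply in this specific situation
    legal_constraints: List[str] = field(default_factory=list)


ctx = Context(
    socio_technical={"role": "employee", "device": "laptop"},
    physical={"location": "office"},
    legal_constraints=["consent required before processing personal data"],
)
print(ctx)
```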

Domain Model: A domain model is an abstract formal description of a socio-technical system. It contains concepts and relationships that represent functionalities and objects. Each domain from the real world is usually mapped to its own domain model when it is formalized for computing.

Ontology: An ontology is a formal specification of a certain domain that describes a set of concepts, relationships and formal axioms that restrict the interpretation of concept instances.
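As a rough illustration of the two preceding terms, the following sketch expresses two concepts, one relationship, and one formal axiom in plain Python. The concepts (User, Document) and the axiom are hypothetical examples; in practice such a specification would be written in an ontology language such as OWL.

```python
# Hypothetical example: two concepts, one relationship, and one formal axiom.
# In practice such a specification would be written in an ontology language
# such as OWL rather than in plain Python.
from dataclasses import dataclass, field
from typing import List


@dataclass
class User:              # concept
    name: str


@dataclass
class Document:          # concept
    title: str
    authors: List[User] = field(default_factory=list)   # relationship: hasAuthor


def satisfies_author_axiom(doc: Document) -> bool:
    """Formal axiom restricting interpretation: every Document has at least one author."""
    return len(doc.authors) >= 1


report = Document(title="Project report", authors=[User("alice")])
assert satisfies_author_axiom(report)
```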

Artifact: Artifacts are artificial objects created by humans. Every document and process created in a system represents an artifact. In software development, the documents, software, and other objects to be created are called artifacts.
