Machine (Technology) Ethics: The Theoretical and Philosophical Paradigms

Ben Tran
Copyright: © 2016 | Pages: 14
DOI: 10.4018/IJT.2016070105
This article was retracted

Abstract

At the foundational level, the code that programmers write is based on instructions and on the purpose the program will later serve. Computers have no discretion beyond what humans incorporate into such systems; they are limited to what their creators choose to specify. However, ABET to date neither requires accredited programs in applied science, computing, engineering, and engineering technology to offer or mandate ethics courses, nor ensures that graduates are trained in ethics. Yet graduates, who then become practitioners, are expected to act as ethical agents. Hence, this article addresses machine ethics, specifically the theoretical and philosophical meaning of ethics, including different types of ethics and utilitarianism. In addition to exploring the theoretical and philosophical paradigm of ethics, technology will be defined in relation to machine ethics.
Article Preview

Introduction

At the foundational level, the code that programmers write is based on instructions and on the purpose the program will later serve. Hence, one can argue that computers have no discretion beyond what humans incorporate into such systems, and that the end results of any code's intentional actions are limited to what its writer chooses. In this mode, computers can be described as tools, or enablers, of what their users want to do; the entire accountability for ethical conduct rests with their creators. This is not to say, however, that humans do not use these enablers in immoral ways. During the 1970s, the leadership of Equity Funding, a U.S. life insurance company, created fake records of life insurance policies, reinsured them, and obtained cash from reinsurers by declaring some of the fictitious policyholders dead. Equity Funding hid these data from its auditors in what it called File 99 (Raval, 2014). Manipulating code or data to commit a criminal or immoral act is thus possible even when computers are no more than tools and the perpetrators come from the ranks of users and creators.

Furthermore, an era of advanced computing, in which computers take on far more sophisticated roles, has arrived and continues to expand. Robotics and nanotechnology are just two examples of developing disciplines that will push the role of computers and computing well past the era of enablers. According to Ray Kurzweil, by the year 2045, "human intelligence will enhance a billion-fold thanks to high-tech brain extension" (Wolfe, 2014). Kurzweil refers to this phenomenon as the singularity, a point at which humans and computers will merge. This two-in-one merger will create serious challenges in allocating moral accountability between the two. To develop insights into the ethical dilemmas of this new world of advanced technologies and their applications, a whole new field, called moral machines or machine ethics, has been emerging. Machine ethics is a discipline that attempts to address the ethics of artificial intelligence (AI); while AI has slowly been moving from fiction and film into the real world over the past several decades, attempts to articulate its moral dimensions are relatively recent. Moral Machines: Teaching Robots Right from Wrong (Wallach & Allen, 2008) and Machine Ethics (Anderson & Anderson, 2011) are two significant publications offering a discussion of morality in the context of smart machines. This leap from computer ethics to machine ethics is necessary because of the elevated status of computers, from mere enablers to intelligent collaborators with humans.

According to Anderson and Anderson (2013) and Raval (2014), James Moor considers computing machines that are basically enablers of tasks to be normative agents, but not necessarily ethical agents, because they merely perform the tasks as specified and their performance can be objectively assessed. Any development of machines beyond this state requires consideration of the ethical dimension that the embedded intelligence should reflect in its design. For this, Moor suggests three ways to classify issues of moral value in machines: ethical impact agents, implicit ethical agents, and explicit ethical agents (Anderson & Anderson, 2011, pp. 13-20; Anderson & Anderson, 2013). Each category progressively assigns a greater moral role to machines. Hence, this article addresses machine ethics, specifically the theoretical and philosophical meaning of ethics, including different types of ethics and utilitarianism. In addition to exploring the theoretical and philosophical paradigm of ethics, technology will be defined in relation to machine ethics.
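To make Moor's distinction more concrete, the following is a minimal, hypothetical sketch, not drawn from the article itself: the first function behaves as an implicit ethical agent, where the norm (never overdraw a customer's account) is hard-wired as a design constraint rather than reasoned about, while the second behaves as a rudimentary explicit ethical agent, applying a crude act-utilitarian scoring rule to choose among candidate actions. All function names, rules, and numbers here are illustrative assumptions only.

```python
# Illustrative sketch of Moor's implicit vs. explicit ethical agents.
# Everything below is a hypothetical example, not the article's own code.

from dataclasses import dataclass
from typing import List


def implicit_agent_dispense(requested: float, balance: float) -> float:
    """Implicit ethical agent: the ethical norm (do not overdraw the
    customer) is embedded in the design as a fixed constraint."""
    return min(requested, balance)


@dataclass
class Action:
    name: str
    benefit: float  # expected good produced by the action
    harm: float     # expected harm caused by the action


def explicit_agent_choose(actions: List[Action]) -> Action:
    """Explicit ethical agent: the machine applies an ethical principle
    (here, a crude act-utilitarian score of benefit minus harm) to
    select among actions rather than following a hard-coded rule."""
    return max(actions, key=lambda a: a.benefit - a.harm)


if __name__ == "__main__":
    # Implicit agent: the design constraint caps the withdrawal at the balance.
    print(implicit_agent_dispense(requested=500.0, balance=120.0))  # 120.0

    # Explicit agent: the utilitarian score picks the higher-net-benefit action.
    options = [
        Action("delay treatment", benefit=1.0, harm=4.0),
        Action("administer drug", benefit=6.0, harm=2.0),
    ]
    print(explicit_agent_choose(options).name)  # administer drug
```

The design point the sketch is meant to surface is that the implicit agent's "ethics" live entirely in its creator's constraint, whereas the explicit agent carries a representation of an ethical principle it can apply to new situations, which is where the allocation of moral accountability begins to shift toward the machine.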
